00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 1822 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3083 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.110 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.111 The recommended git tool is: git 00:00:00.111 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.154 Fetching changes from the remote Git repository 00:00:00.156 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.189 Using shallow fetch with depth 1 00:00:00.189 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.189 > git --version # timeout=10 00:00:00.209 > git --version # 'git version 2.39.2' 00:00:00.209 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.209 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.209 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.045 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.057 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.070 Checking out Revision f620ee97e10840540f53609861ee9b86caa3c192 (FETCH_HEAD) 00:00:07.070 > git config core.sparsecheckout # timeout=10 00:00:07.081 > git read-tree -mu HEAD # timeout=10 00:00:07.103 > git checkout -f f620ee97e10840540f53609861ee9b86caa3c192 # timeout=5 00:00:07.124 Commit message: "change IP of vertiv1 PDU" 00:00:07.125 > git rev-list --no-walk f620ee97e10840540f53609861ee9b86caa3c192 # timeout=10 00:00:07.223 [Pipeline] Start of Pipeline 00:00:07.238 [Pipeline] library 00:00:07.239 Loading library shm_lib@master 00:00:07.240 Library shm_lib@master is cached. Copying from home. 00:00:07.255 [Pipeline] node 00:00:07.264 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:07.266 [Pipeline] { 00:00:07.277 [Pipeline] catchError 00:00:07.278 [Pipeline] { 00:00:07.291 [Pipeline] wrap 00:00:07.303 [Pipeline] { 00:00:07.312 [Pipeline] stage 00:00:07.314 [Pipeline] { (Prologue) 00:00:07.333 [Pipeline] echo 00:00:07.334 Node: VM-host-SM9 00:00:07.340 [Pipeline] cleanWs 00:00:07.347 [WS-CLEANUP] Deleting project workspace... 00:00:07.347 [WS-CLEANUP] Deferred wipeout is used... 
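
The prologue above is a shallow, pinned checkout of the jbp (Jenkins build pool) job-config repository. A minimal sketch of the same sequence, with the URL and revision taken from the log and the GIT_ASKPASS credential and proxy handling that Jenkins injects left out:

#!/usr/bin/env bash
# Shallow checkout of the job-config repo at the pinned revision, as in the
# prologue above; Jenkins' credential and proxy setup is omitted.
set -euo pipefail
repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
rev=f620ee97e10840540f53609861ee9b86caa3c192

git init jbp && cd jbp
git config remote.origin.url "$repo"
# --depth=1 fetches only the tip of master, which is all the job scripts need
git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
git checkout -f "$rev"
git log --oneline -n 1   # expected: f620ee9 change IP of vertiv1 PDU
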
00:00:07.352 [WS-CLEANUP] done 00:00:07.541 [Pipeline] setCustomBuildProperty 00:00:07.591 [Pipeline] nodesByLabel 00:00:07.592 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.599 [Pipeline] httpRequest 00:00:07.604 HttpMethod: GET 00:00:07.605 URL: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:07.605 Sending request to url: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:07.622 Response Code: HTTP/1.1 200 OK 00:00:07.622 Success: Status code 200 is in the accepted range: 200,404 00:00:07.623 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:13.538 [Pipeline] sh 00:00:13.816 + tar --no-same-owner -xf jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:13.837 [Pipeline] httpRequest 00:00:13.842 HttpMethod: GET 00:00:13.843 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:13.843 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:13.860 Response Code: HTTP/1.1 200 OK 00:00:13.860 Success: Status code 200 is in the accepted range: 200,404 00:00:13.861 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:59.730 [Pipeline] sh 00:01:00.061 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:03.393 [Pipeline] sh 00:01:03.672 + git -C spdk log --oneline -n5 00:01:03.672 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:01:03.672 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:01:03.672 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover 00:01:03.672 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:01:03.672 3b33f4333 test/nvme/cuse: Fix typo 00:01:03.692 [Pipeline] writeFile 00:01:03.709 [Pipeline] sh 00:01:03.989 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:04.000 [Pipeline] sh 00:01:04.277 + cat autorun-spdk.conf 00:01:04.277 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.277 SPDK_TEST_NVMF=1 00:01:04.277 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.277 SPDK_TEST_VFIOUSER=1 00:01:04.277 SPDK_TEST_USDT=1 00:01:04.277 SPDK_RUN_UBSAN=1 00:01:04.277 SPDK_TEST_NVMF_MDNS=1 00:01:04.277 NET_TYPE=virt 00:01:04.277 SPDK_JSONRPC_GO_CLIENT=1 00:01:04.277 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.282 RUN_NIGHTLY=1 00:01:04.284 [Pipeline] } 00:01:04.297 [Pipeline] // stage 00:01:04.311 [Pipeline] stage 00:01:04.313 [Pipeline] { (Run VM) 00:01:04.324 [Pipeline] sh 00:01:04.595 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:04.596 + echo 'Start stage prepare_nvme.sh' 00:01:04.596 Start stage prepare_nvme.sh 00:01:04.596 + [[ -n 0 ]] 00:01:04.596 + disk_prefix=ex0 00:01:04.596 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:01:04.596 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:01:04.596 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:01:04.596 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.596 ++ SPDK_TEST_NVMF=1 00:01:04.596 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.596 ++ SPDK_TEST_VFIOUSER=1 00:01:04.596 ++ SPDK_TEST_USDT=1 00:01:04.596 ++ SPDK_RUN_UBSAN=1 00:01:04.596 ++ SPDK_TEST_NVMF_MDNS=1 00:01:04.596 ++ NET_TYPE=virt 00:01:04.596 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:04.596 ++ 
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.596 ++ RUN_NIGHTLY=1 00:01:04.596 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:04.596 + nvme_files=() 00:01:04.596 + declare -A nvme_files 00:01:04.596 + backend_dir=/var/lib/libvirt/images/backends 00:01:04.596 + nvme_files['nvme.img']=5G 00:01:04.596 + nvme_files['nvme-cmb.img']=5G 00:01:04.596 + nvme_files['nvme-multi0.img']=4G 00:01:04.596 + nvme_files['nvme-multi1.img']=4G 00:01:04.596 + nvme_files['nvme-multi2.img']=4G 00:01:04.596 + nvme_files['nvme-openstack.img']=8G 00:01:04.596 + nvme_files['nvme-zns.img']=5G 00:01:04.596 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:04.596 + (( SPDK_TEST_FTL == 1 )) 00:01:04.596 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:04.596 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:04.596 + for nvme in "${!nvme_files[@]}" 00:01:04.596 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:04.596 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.596 + for nvme in "${!nvme_files[@]}" 00:01:04.596 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:04.596 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.596 + for nvme in "${!nvme_files[@]}" 00:01:04.596 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:04.596 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:04.596 + for nvme in "${!nvme_files[@]}" 00:01:04.596 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:04.596 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.596 + for nvme in "${!nvme_files[@]}" 00:01:04.596 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:04.596 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.596 + for nvme in "${!nvme_files[@]}" 00:01:04.596 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:04.596 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.596 + for nvme in "${!nvme_files[@]}" 00:01:04.596 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:04.854 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.854 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:04.854 + echo 'End stage prepare_nvme.sh' 00:01:04.854 End stage prepare_nvme.sh 00:01:04.866 [Pipeline] sh 00:01:05.146 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:05.146 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:01:05.146 00:01:05.146 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:01:05.146 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:01:05.146 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:05.146 HELP=0 00:01:05.146 DRY_RUN=0 00:01:05.146 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:05.146 NVME_DISKS_TYPE=nvme,nvme, 00:01:05.146 NVME_AUTO_CREATE=0 00:01:05.146 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:05.146 NVME_CMB=,, 00:01:05.146 NVME_PMR=,, 00:01:05.147 NVME_ZNS=,, 00:01:05.147 NVME_MS=,, 00:01:05.147 NVME_FDP=,, 00:01:05.147 SPDK_VAGRANT_DISTRO=fedora38 00:01:05.147 SPDK_VAGRANT_VMCPU=10 00:01:05.147 SPDK_VAGRANT_VMRAM=12288 00:01:05.147 SPDK_VAGRANT_PROVIDER=libvirt 00:01:05.147 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:05.147 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:05.147 SPDK_OPENSTACK_NETWORK=0 00:01:05.147 VAGRANT_PACKAGE_BOX=0 00:01:05.147 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:05.147 FORCE_DISTRO=true 00:01:05.147 VAGRANT_BOX_VERSION= 00:01:05.147 EXTRA_VAGRANTFILES= 00:01:05.147 NIC_MODEL=e1000 00:01:05.147 00:01:05.147 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:01:05.147 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:08.432 Bringing machine 'default' up with 'libvirt' provider... 00:01:09.000 ==> default: Creating image (snapshot of base box volume). 00:01:09.259 ==> default: Creating domain with the following settings... 
00:01:09.259 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715651903_2c4aeb4e3bdfa9d1f280 00:01:09.259 ==> default: -- Domain type: kvm 00:01:09.259 ==> default: -- Cpus: 10 00:01:09.259 ==> default: -- Feature: acpi 00:01:09.259 ==> default: -- Feature: apic 00:01:09.259 ==> default: -- Feature: pae 00:01:09.259 ==> default: -- Memory: 12288M 00:01:09.259 ==> default: -- Memory Backing: hugepages: 00:01:09.259 ==> default: -- Management MAC: 00:01:09.259 ==> default: -- Loader: 00:01:09.259 ==> default: -- Nvram: 00:01:09.259 ==> default: -- Base box: spdk/fedora38 00:01:09.259 ==> default: -- Storage pool: default 00:01:09.259 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715651903_2c4aeb4e3bdfa9d1f280.img (20G) 00:01:09.259 ==> default: -- Volume Cache: default 00:01:09.259 ==> default: -- Kernel: 00:01:09.259 ==> default: -- Initrd: 00:01:09.259 ==> default: -- Graphics Type: vnc 00:01:09.259 ==> default: -- Graphics Port: -1 00:01:09.259 ==> default: -- Graphics IP: 127.0.0.1 00:01:09.259 ==> default: -- Graphics Password: Not defined 00:01:09.259 ==> default: -- Video Type: cirrus 00:01:09.259 ==> default: -- Video VRAM: 9216 00:01:09.259 ==> default: -- Sound Type: 00:01:09.259 ==> default: -- Keymap: en-us 00:01:09.259 ==> default: -- TPM Path: 00:01:09.259 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:09.259 ==> default: -- Command line args: 00:01:09.259 ==> default: -> value=-device, 00:01:09.259 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:09.259 ==> default: -> value=-drive, 00:01:09.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:09.259 ==> default: -> value=-device, 00:01:09.259 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.259 ==> default: -> value=-device, 00:01:09.259 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:09.259 ==> default: -> value=-drive, 00:01:09.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:09.259 ==> default: -> value=-device, 00:01:09.259 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.259 ==> default: -> value=-drive, 00:01:09.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:09.259 ==> default: -> value=-device, 00:01:09.259 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.259 ==> default: -> value=-drive, 00:01:09.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:09.259 ==> default: -> value=-device, 00:01:09.259 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.259 ==> default: Creating shared folders metadata... 00:01:09.259 ==> default: Starting domain. 00:01:10.638 ==> default: Waiting for domain to get an IP address... 00:01:28.743 ==> default: Waiting for SSH to become available... 00:01:30.117 ==> default: Configuring and enabling network interfaces... 
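
The "Command line args" block above is the NVMe part of the QEMU invocation that vagrant-libvirt generates for the domain: controller nvme-0 (serial 12340) carries one namespace backed by ex0-nvme.img, and controller nvme-1 (serial 12341) carries three namespaces backed by the ex0-nvme-multi*.img files created in prepare_nvme.sh. The sketch below recreates the backing files and assembles those arguments into a plain qemu-system-x86_64 call; the qemu-img step is only an assumed equivalent of create_nvme_img.sh (its code is not shown in this log), the -enable-kvm/-smp/-m flags are inferred from the domain settings, and the real domain also carries the OS disk, NIC and console devices omitted here:

#!/usr/bin/env bash
set -euo pipefail
backends=/var/lib/libvirt/images/backends

# Raw, falloc-preallocated backing files, matching the "Formatting ..." lines
# in the prepare_nvme.sh stage above.
for img in ex0-nvme.img:5G ex0-nvme-multi0.img:4G ex0-nvme-multi1.img:4G ex0-nvme-multi2.img:4G; do
  sudo qemu-img create -f raw -o preallocation=falloc \
    "$backends/${img%%:*}" "${img##*:}"
done

# Two emulated NVMe controllers: one single-namespace, one with three namespaces.
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
  -enable-kvm -smp 10 -m 12288 \
  -device nvme,id=nvme-0,serial=12340 \
  -drive format=raw,file=$backends/ex0-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341 \
  -drive format=raw,file=$backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=$backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=$backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096
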
00:01:34.301 default: SSH address: 192.168.121.171:22 00:01:34.301 default: SSH username: vagrant 00:01:34.301 default: SSH auth method: private key 00:01:35.757 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:44.088 ==> default: Mounting SSHFS shared folder... 00:01:44.654 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:44.654 ==> default: Checking Mount.. 00:01:45.588 ==> default: Folder Successfully Mounted! 00:01:45.588 ==> default: Running provisioner: file... 00:01:46.521 default: ~/.gitconfig => .gitconfig 00:01:46.779 00:01:46.779 SUCCESS! 00:01:46.779 00:01:46.779 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:46.779 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:46.779 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:01:46.779 00:01:46.788 [Pipeline] } 00:01:46.803 [Pipeline] // stage 00:01:46.811 [Pipeline] dir 00:01:46.812 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:01:46.813 [Pipeline] { 00:01:46.823 [Pipeline] catchError 00:01:46.824 [Pipeline] { 00:01:46.836 [Pipeline] sh 00:01:47.135 + vagrant ssh-config --host vagrant 00:01:47.136 + sed -ne /^Host/,$p 00:01:47.136 + tee ssh_conf 00:01:51.344 Host vagrant 00:01:51.344 HostName 192.168.121.171 00:01:51.344 User vagrant 00:01:51.344 Port 22 00:01:51.344 UserKnownHostsFile /dev/null 00:01:51.344 StrictHostKeyChecking no 00:01:51.344 PasswordAuthentication no 00:01:51.344 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:01:51.344 IdentitiesOnly yes 00:01:51.344 LogLevel FATAL 00:01:51.344 ForwardAgent yes 00:01:51.344 ForwardX11 yes 00:01:51.344 00:01:51.358 [Pipeline] withEnv 00:01:51.361 [Pipeline] { 00:01:51.377 [Pipeline] sh 00:01:51.655 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:51.655 source /etc/os-release 00:01:51.655 [[ -e /image.version ]] && img=$(< /image.version) 00:01:51.655 # Minimal, systemd-like check. 00:01:51.655 if [[ -e /.dockerenv ]]; then 00:01:51.655 # Clear garbage from the node's name: 00:01:51.655 # agt-er_autotest_547-896 -> autotest_547-896 00:01:51.655 # $HOSTNAME is the actual container id 00:01:51.655 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:51.655 if mountpoint -q /etc/hostname; then 00:01:51.655 # We can assume this is a mount from a host where container is running, 00:01:51.655 # so fetch its hostname to easily identify the target swarm worker. 
00:01:51.655 container="$(< /etc/hostname) ($agent)" 00:01:51.655 else 00:01:51.655 # Fallback 00:01:51.655 container=$agent 00:01:51.655 fi 00:01:51.655 fi 00:01:51.655 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:51.655 00:01:51.924 [Pipeline] } 00:01:51.943 [Pipeline] // withEnv 00:01:51.952 [Pipeline] setCustomBuildProperty 00:01:51.968 [Pipeline] stage 00:01:51.970 [Pipeline] { (Tests) 00:01:51.991 [Pipeline] sh 00:01:52.269 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:52.600 [Pipeline] timeout 00:01:52.600 Timeout set to expire in 40 min 00:01:52.602 [Pipeline] { 00:01:52.618 [Pipeline] sh 00:01:52.897 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:53.464 HEAD is now at 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:01:53.477 [Pipeline] sh 00:01:53.756 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:54.030 [Pipeline] sh 00:01:54.307 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:54.580 [Pipeline] sh 00:01:54.861 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:01:55.120 ++ readlink -f spdk_repo 00:01:55.120 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:55.120 + [[ -n /home/vagrant/spdk_repo ]] 00:01:55.120 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:55.120 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:55.120 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:55.120 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:55.120 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:55.120 + cd /home/vagrant/spdk_repo 00:01:55.120 + source /etc/os-release 00:01:55.120 ++ NAME='Fedora Linux' 00:01:55.120 ++ VERSION='38 (Cloud Edition)' 00:01:55.120 ++ ID=fedora 00:01:55.120 ++ VERSION_ID=38 00:01:55.120 ++ VERSION_CODENAME= 00:01:55.120 ++ PLATFORM_ID=platform:f38 00:01:55.120 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:55.120 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:55.120 ++ LOGO=fedora-logo-icon 00:01:55.120 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:55.120 ++ HOME_URL=https://fedoraproject.org/ 00:01:55.120 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:55.120 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:55.120 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:55.120 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:55.120 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:55.120 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:55.120 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:55.120 ++ SUPPORT_END=2024-05-14 00:01:55.120 ++ VARIANT='Cloud Edition' 00:01:55.120 ++ VARIANT_ID=cloud 00:01:55.120 + uname -a 00:01:55.120 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:55.120 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:55.120 Hugepages 00:01:55.120 node hugesize free / total 00:01:55.120 node0 1048576kB 0 / 0 00:01:55.120 node0 2048kB 0 / 0 00:01:55.120 00:01:55.120 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:55.120 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:55.120 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:55.378 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:55.378 + rm -f /tmp/spdk-ld-path 00:01:55.378 + source 
autorun-spdk.conf 00:01:55.378 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.378 ++ SPDK_TEST_NVMF=1 00:01:55.378 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.378 ++ SPDK_TEST_VFIOUSER=1 00:01:55.378 ++ SPDK_TEST_USDT=1 00:01:55.378 ++ SPDK_RUN_UBSAN=1 00:01:55.378 ++ SPDK_TEST_NVMF_MDNS=1 00:01:55.378 ++ NET_TYPE=virt 00:01:55.378 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:55.378 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:55.378 ++ RUN_NIGHTLY=1 00:01:55.378 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:55.378 + [[ -n '' ]] 00:01:55.378 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:55.378 + for M in /var/spdk/build-*-manifest.txt 00:01:55.378 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:55.378 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:55.378 + for M in /var/spdk/build-*-manifest.txt 00:01:55.378 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:55.378 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:55.378 ++ uname 00:01:55.378 + [[ Linux == \L\i\n\u\x ]] 00:01:55.378 + sudo dmesg -T 00:01:55.378 + sudo dmesg --clear 00:01:55.378 + dmesg_pid=5147 00:01:55.378 + sudo dmesg -Tw 00:01:55.378 + [[ Fedora Linux == FreeBSD ]] 00:01:55.378 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.378 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.378 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:55.378 + [[ -x /usr/src/fio-static/fio ]] 00:01:55.378 + export FIO_BIN=/usr/src/fio-static/fio 00:01:55.378 + FIO_BIN=/usr/src/fio-static/fio 00:01:55.378 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:55.378 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:55.378 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:55.378 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.379 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.379 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:55.379 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.379 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.379 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:55.379 Test configuration: 00:01:55.379 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.379 SPDK_TEST_NVMF=1 00:01:55.379 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.379 SPDK_TEST_VFIOUSER=1 00:01:55.379 SPDK_TEST_USDT=1 00:01:55.379 SPDK_RUN_UBSAN=1 00:01:55.379 SPDK_TEST_NVMF_MDNS=1 00:01:55.379 NET_TYPE=virt 00:01:55.379 SPDK_JSONRPC_GO_CLIENT=1 00:01:55.379 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:55.379 RUN_NIGHTLY=1 01:59:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:55.379 01:59:09 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:55.379 01:59:09 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:55.379 01:59:09 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:55.379 01:59:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.379 01:59:09 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.379 01:59:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.379 01:59:09 -- paths/export.sh@5 -- $ export PATH 00:01:55.379 01:59:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.379 01:59:09 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:55.379 01:59:09 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:55.379 01:59:09 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715651949.XXXXXX 00:01:55.379 01:59:09 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715651949.8gVOPR 00:01:55.379 01:59:09 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:55.379 01:59:09 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:55.379 01:59:09 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:55.379 01:59:09 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:55.379 01:59:09 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:55.379 01:59:09 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:55.379 01:59:09 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:55.379 01:59:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.379 01:59:09 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:01:55.379 01:59:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:55.379 01:59:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:55.379 01:59:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:55.379 01:59:09 -- spdk/autobuild.sh@16 -- $ date -u 00:01:55.379 Tue May 14 01:59:09 AM UTC 2024 00:01:55.379 01:59:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:55.379 LTS-24-g36faa8c31 00:01:55.379 01:59:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:55.379 01:59:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:55.379 01:59:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:55.379 01:59:09 -- 
common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:55.379 01:59:09 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:55.379 01:59:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.379 ************************************ 00:01:55.379 START TEST ubsan 00:01:55.379 ************************************ 00:01:55.379 using ubsan 00:01:55.379 01:59:09 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:55.379 00:01:55.379 real 0m0.000s 00:01:55.379 user 0m0.000s 00:01:55.379 sys 0m0.000s 00:01:55.379 01:59:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:55.379 ************************************ 00:01:55.379 01:59:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.379 END TEST ubsan 00:01:55.379 ************************************ 00:01:55.637 01:59:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:55.637 01:59:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:55.637 01:59:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:55.637 01:59:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:55.637 01:59:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:55.637 01:59:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:55.637 01:59:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:55.637 01:59:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:55.637 01:59:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:01:55.637 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:55.637 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:56.204 Using 'verbs' RDMA provider 00:02:08.985 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:21.180 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:21.180 go version go1.21.1 linux/amd64 00:02:21.180 Creating mk/config.mk...done. 00:02:21.180 Creating mk/cc.flags.mk...done. 00:02:21.180 Type 'make' to build. 00:02:21.180 01:59:34 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:21.180 01:59:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:21.180 01:59:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:21.180 01:59:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.180 ************************************ 00:02:21.180 START TEST make 00:02:21.180 ************************************ 00:02:21.180 01:59:35 -- common/autotest_common.sh@1104 -- $ make -j10 00:02:21.180 make[1]: Nothing to be done for 'all'. 
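
The autobuild step above reduces to a single configure invocation (the flag string printed by get_config_params, plus --with-shared) followed by a parallel make. A reproduction sketch inside the VM, leaving out run_test's banner and timing wrapper:

cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt \
    --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk \
    --with-vfio-user --with-avahi --with-golang --with-shared
make -j10
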
00:02:22.552 The Meson build system 00:02:22.552 Version: 1.3.1 00:02:22.552 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:22.552 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:22.552 Build type: native build 00:02:22.552 Project name: libvfio-user 00:02:22.552 Project version: 0.0.1 00:02:22.552 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:22.552 C linker for the host machine: cc ld.bfd 2.39-16 00:02:22.552 Host machine cpu family: x86_64 00:02:22.552 Host machine cpu: x86_64 00:02:22.552 Run-time dependency threads found: YES 00:02:22.552 Library dl found: YES 00:02:22.552 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:22.552 Run-time dependency json-c found: YES 0.17 00:02:22.552 Run-time dependency cmocka found: YES 1.1.7 00:02:22.552 Program pytest-3 found: NO 00:02:22.552 Program flake8 found: NO 00:02:22.552 Program misspell-fixer found: NO 00:02:22.552 Program restructuredtext-lint found: NO 00:02:22.552 Program valgrind found: YES (/usr/bin/valgrind) 00:02:22.552 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:22.552 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:22.552 Compiler for C supports arguments -Wwrite-strings: YES 00:02:22.552 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:22.552 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:22.552 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:22.552 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:22.552 Build targets in project: 8 00:02:22.552 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:22.552 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:22.552 00:02:22.552 libvfio-user 0.0.1 00:02:22.552 00:02:22.552 User defined options 00:02:22.552 buildtype : debug 00:02:22.552 default_library: shared 00:02:22.552 libdir : /usr/local/lib 00:02:22.552 00:02:22.553 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.118 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:23.118 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:23.118 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:23.376 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:23.376 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:23.376 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:23.376 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:23.376 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:23.376 [8/37] Compiling C object samples/null.p/null.c.o 00:02:23.376 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:23.376 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:23.376 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:23.376 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:23.376 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:23.376 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:23.634 [15/37] Compiling C object samples/server.p/server.c.o 00:02:23.634 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:23.634 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:23.634 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:23.634 [19/37] Compiling C object samples/client.p/client.c.o 00:02:23.634 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:23.634 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:23.634 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:23.634 [23/37] Linking target samples/client 00:02:23.892 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:23.892 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:23.892 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:23.892 [27/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:23.892 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:23.892 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:23.892 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:23.892 [31/37] Linking target test/unit_tests 00:02:24.151 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:24.151 [33/37] Linking target samples/gpio-pci-idio-16 00:02:24.151 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:24.151 [35/37] Linking target samples/lspci 00:02:24.151 [36/37] Linking target samples/server 00:02:24.151 [37/37] Linking target samples/null 00:02:24.409 INFO: autodetecting backend as ninja 00:02:24.409 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:24.409 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:25.007 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:25.007 ninja: no work to do. 00:02:37.200 The Meson build system 00:02:37.200 Version: 1.3.1 00:02:37.200 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:37.200 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:37.200 Build type: native build 00:02:37.200 Program cat found: YES (/usr/bin/cat) 00:02:37.200 Project name: DPDK 00:02:37.200 Project version: 23.11.0 00:02:37.200 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:37.200 C linker for the host machine: cc ld.bfd 2.39-16 00:02:37.200 Host machine cpu family: x86_64 00:02:37.200 Host machine cpu: x86_64 00:02:37.200 Message: ## Building in Developer Mode ## 00:02:37.200 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.200 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:37.200 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.200 Program python3 found: YES (/usr/bin/python3) 00:02:37.200 Program cat found: YES (/usr/bin/cat) 00:02:37.200 Compiler for C supports arguments -march=native: YES 00:02:37.200 Checking for size of "void *" : 8 00:02:37.200 Checking for size of "void *" : 8 (cached) 00:02:37.200 Library m found: YES 00:02:37.200 Library numa found: YES 00:02:37.200 Has header "numaif.h" : YES 00:02:37.200 Library fdt found: NO 00:02:37.200 Library execinfo found: NO 00:02:37.200 Has header "execinfo.h" : YES 00:02:37.200 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:37.200 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.200 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.200 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.200 Run-time dependency openssl found: YES 3.0.9 00:02:37.200 Run-time dependency libpcap found: YES 1.10.4 00:02:37.200 Has header "pcap.h" with dependency libpcap: YES 00:02:37.200 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.201 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.201 Compiler for C supports arguments -Wformat: YES 00:02:37.201 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.201 Compiler for C supports arguments -Wformat-security: NO 00:02:37.201 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.201 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.201 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.201 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.201 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.201 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.201 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.201 Compiler for C supports arguments -Wundef: YES 00:02:37.201 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.201 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.201 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.201 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.201 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.201 Program objdump found: YES (/usr/bin/objdump) 00:02:37.201 
Compiler for C supports arguments -mavx512f: YES 00:02:37.201 Checking if "AVX512 checking" compiles: YES 00:02:37.201 Fetching value of define "__SSE4_2__" : 1 00:02:37.201 Fetching value of define "__AES__" : 1 00:02:37.201 Fetching value of define "__AVX__" : 1 00:02:37.201 Fetching value of define "__AVX2__" : 1 00:02:37.201 Fetching value of define "__AVX512BW__" : (undefined) 00:02:37.201 Fetching value of define "__AVX512CD__" : (undefined) 00:02:37.201 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:37.201 Fetching value of define "__AVX512F__" : (undefined) 00:02:37.201 Fetching value of define "__AVX512VL__" : (undefined) 00:02:37.201 Fetching value of define "__PCLMUL__" : 1 00:02:37.201 Fetching value of define "__RDRND__" : 1 00:02:37.201 Fetching value of define "__RDSEED__" : 1 00:02:37.201 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.201 Fetching value of define "__znver1__" : (undefined) 00:02:37.201 Fetching value of define "__znver2__" : (undefined) 00:02:37.201 Fetching value of define "__znver3__" : (undefined) 00:02:37.201 Fetching value of define "__znver4__" : (undefined) 00:02:37.201 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.201 Message: lib/log: Defining dependency "log" 00:02:37.201 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.201 Message: lib/telemetry: Defining dependency "telemetry" 00:02:37.201 Checking for function "getentropy" : NO 00:02:37.201 Message: lib/eal: Defining dependency "eal" 00:02:37.201 Message: lib/ring: Defining dependency "ring" 00:02:37.201 Message: lib/rcu: Defining dependency "rcu" 00:02:37.201 Message: lib/mempool: Defining dependency "mempool" 00:02:37.201 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.201 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.201 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.201 Compiler for C supports arguments -mpclmul: YES 00:02:37.201 Compiler for C supports arguments -maes: YES 00:02:37.201 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.201 Compiler for C supports arguments -mavx512bw: YES 00:02:37.201 Compiler for C supports arguments -mavx512dq: YES 00:02:37.201 Compiler for C supports arguments -mavx512vl: YES 00:02:37.201 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.201 Compiler for C supports arguments -mavx2: YES 00:02:37.201 Compiler for C supports arguments -mavx: YES 00:02:37.201 Message: lib/net: Defining dependency "net" 00:02:37.201 Message: lib/meter: Defining dependency "meter" 00:02:37.201 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.201 Message: lib/pci: Defining dependency "pci" 00:02:37.201 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.201 Message: lib/hash: Defining dependency "hash" 00:02:37.201 Message: lib/timer: Defining dependency "timer" 00:02:37.201 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.201 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.201 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.201 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.201 Message: lib/power: Defining dependency "power" 00:02:37.201 Message: lib/reorder: Defining dependency "reorder" 00:02:37.201 Message: lib/security: Defining dependency "security" 00:02:37.201 Has header "linux/userfaultfd.h" : YES 00:02:37.201 Has header "linux/vduse.h" : YES 00:02:37.201 Message: lib/vhost: Defining dependency "vhost" 00:02:37.201 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:37.201 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:37.201 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:37.201 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:37.201 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:37.201 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:37.201 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:37.201 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:37.201 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:37.201 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:37.201 Program doxygen found: YES (/usr/bin/doxygen) 00:02:37.201 Configuring doxy-api-html.conf using configuration 00:02:37.201 Configuring doxy-api-man.conf using configuration 00:02:37.201 Program mandb found: YES (/usr/bin/mandb) 00:02:37.201 Program sphinx-build found: NO 00:02:37.201 Configuring rte_build_config.h using configuration 00:02:37.201 Message: 00:02:37.201 ================= 00:02:37.201 Applications Enabled 00:02:37.201 ================= 00:02:37.201 00:02:37.201 apps: 00:02:37.201 00:02:37.201 00:02:37.201 Message: 00:02:37.201 ================= 00:02:37.201 Libraries Enabled 00:02:37.201 ================= 00:02:37.201 00:02:37.201 libs: 00:02:37.201 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:37.201 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:37.201 cryptodev, dmadev, power, reorder, security, vhost, 00:02:37.201 00:02:37.201 Message: 00:02:37.201 =============== 00:02:37.201 Drivers Enabled 00:02:37.201 =============== 00:02:37.201 00:02:37.201 common: 00:02:37.201 00:02:37.201 bus: 00:02:37.201 pci, vdev, 00:02:37.201 mempool: 00:02:37.201 ring, 00:02:37.201 dma: 00:02:37.201 00:02:37.201 net: 00:02:37.201 00:02:37.201 crypto: 00:02:37.201 00:02:37.201 compress: 00:02:37.201 00:02:37.201 vdpa: 00:02:37.201 00:02:37.201 00:02:37.201 Message: 00:02:37.201 ================= 00:02:37.201 Content Skipped 00:02:37.201 ================= 00:02:37.201 00:02:37.201 apps: 00:02:37.201 dumpcap: explicitly disabled via build config 00:02:37.201 graph: explicitly disabled via build config 00:02:37.201 pdump: explicitly disabled via build config 00:02:37.201 proc-info: explicitly disabled via build config 00:02:37.201 test-acl: explicitly disabled via build config 00:02:37.201 test-bbdev: explicitly disabled via build config 00:02:37.201 test-cmdline: explicitly disabled via build config 00:02:37.201 test-compress-perf: explicitly disabled via build config 00:02:37.201 test-crypto-perf: explicitly disabled via build config 00:02:37.201 test-dma-perf: explicitly disabled via build config 00:02:37.201 test-eventdev: explicitly disabled via build config 00:02:37.201 test-fib: explicitly disabled via build config 00:02:37.201 test-flow-perf: explicitly disabled via build config 00:02:37.201 test-gpudev: explicitly disabled via build config 00:02:37.201 test-mldev: explicitly disabled via build config 00:02:37.201 test-pipeline: explicitly disabled via build config 00:02:37.201 test-pmd: explicitly disabled via build config 00:02:37.201 test-regex: explicitly disabled via build config 00:02:37.201 test-sad: explicitly disabled via build config 00:02:37.201 test-security-perf: explicitly disabled via build config 00:02:37.201 00:02:37.201 libs: 00:02:37.201 metrics: explicitly disabled 
via build config 00:02:37.201 acl: explicitly disabled via build config 00:02:37.201 bbdev: explicitly disabled via build config 00:02:37.201 bitratestats: explicitly disabled via build config 00:02:37.201 bpf: explicitly disabled via build config 00:02:37.201 cfgfile: explicitly disabled via build config 00:02:37.201 distributor: explicitly disabled via build config 00:02:37.201 efd: explicitly disabled via build config 00:02:37.201 eventdev: explicitly disabled via build config 00:02:37.201 dispatcher: explicitly disabled via build config 00:02:37.201 gpudev: explicitly disabled via build config 00:02:37.201 gro: explicitly disabled via build config 00:02:37.201 gso: explicitly disabled via build config 00:02:37.201 ip_frag: explicitly disabled via build config 00:02:37.201 jobstats: explicitly disabled via build config 00:02:37.201 latencystats: explicitly disabled via build config 00:02:37.201 lpm: explicitly disabled via build config 00:02:37.201 member: explicitly disabled via build config 00:02:37.201 pcapng: explicitly disabled via build config 00:02:37.201 rawdev: explicitly disabled via build config 00:02:37.201 regexdev: explicitly disabled via build config 00:02:37.201 mldev: explicitly disabled via build config 00:02:37.201 rib: explicitly disabled via build config 00:02:37.201 sched: explicitly disabled via build config 00:02:37.201 stack: explicitly disabled via build config 00:02:37.201 ipsec: explicitly disabled via build config 00:02:37.201 pdcp: explicitly disabled via build config 00:02:37.201 fib: explicitly disabled via build config 00:02:37.201 port: explicitly disabled via build config 00:02:37.201 pdump: explicitly disabled via build config 00:02:37.201 table: explicitly disabled via build config 00:02:37.201 pipeline: explicitly disabled via build config 00:02:37.201 graph: explicitly disabled via build config 00:02:37.201 node: explicitly disabled via build config 00:02:37.202 00:02:37.202 drivers: 00:02:37.202 common/cpt: not in enabled drivers build config 00:02:37.202 common/dpaax: not in enabled drivers build config 00:02:37.202 common/iavf: not in enabled drivers build config 00:02:37.202 common/idpf: not in enabled drivers build config 00:02:37.202 common/mvep: not in enabled drivers build config 00:02:37.202 common/octeontx: not in enabled drivers build config 00:02:37.202 bus/auxiliary: not in enabled drivers build config 00:02:37.202 bus/cdx: not in enabled drivers build config 00:02:37.202 bus/dpaa: not in enabled drivers build config 00:02:37.202 bus/fslmc: not in enabled drivers build config 00:02:37.202 bus/ifpga: not in enabled drivers build config 00:02:37.202 bus/platform: not in enabled drivers build config 00:02:37.202 bus/vmbus: not in enabled drivers build config 00:02:37.202 common/cnxk: not in enabled drivers build config 00:02:37.202 common/mlx5: not in enabled drivers build config 00:02:37.202 common/nfp: not in enabled drivers build config 00:02:37.202 common/qat: not in enabled drivers build config 00:02:37.202 common/sfc_efx: not in enabled drivers build config 00:02:37.202 mempool/bucket: not in enabled drivers build config 00:02:37.202 mempool/cnxk: not in enabled drivers build config 00:02:37.202 mempool/dpaa: not in enabled drivers build config 00:02:37.202 mempool/dpaa2: not in enabled drivers build config 00:02:37.202 mempool/octeontx: not in enabled drivers build config 00:02:37.202 mempool/stack: not in enabled drivers build config 00:02:37.202 dma/cnxk: not in enabled drivers build config 00:02:37.202 dma/dpaa: not in enabled 
drivers build config 00:02:37.202 dma/dpaa2: not in enabled drivers build config 00:02:37.202 dma/hisilicon: not in enabled drivers build config 00:02:37.202 dma/idxd: not in enabled drivers build config 00:02:37.202 dma/ioat: not in enabled drivers build config 00:02:37.202 dma/skeleton: not in enabled drivers build config 00:02:37.202 net/af_packet: not in enabled drivers build config 00:02:37.202 net/af_xdp: not in enabled drivers build config 00:02:37.202 net/ark: not in enabled drivers build config 00:02:37.202 net/atlantic: not in enabled drivers build config 00:02:37.202 net/avp: not in enabled drivers build config 00:02:37.202 net/axgbe: not in enabled drivers build config 00:02:37.202 net/bnx2x: not in enabled drivers build config 00:02:37.202 net/bnxt: not in enabled drivers build config 00:02:37.202 net/bonding: not in enabled drivers build config 00:02:37.202 net/cnxk: not in enabled drivers build config 00:02:37.202 net/cpfl: not in enabled drivers build config 00:02:37.202 net/cxgbe: not in enabled drivers build config 00:02:37.202 net/dpaa: not in enabled drivers build config 00:02:37.202 net/dpaa2: not in enabled drivers build config 00:02:37.202 net/e1000: not in enabled drivers build config 00:02:37.202 net/ena: not in enabled drivers build config 00:02:37.202 net/enetc: not in enabled drivers build config 00:02:37.202 net/enetfec: not in enabled drivers build config 00:02:37.202 net/enic: not in enabled drivers build config 00:02:37.202 net/failsafe: not in enabled drivers build config 00:02:37.202 net/fm10k: not in enabled drivers build config 00:02:37.202 net/gve: not in enabled drivers build config 00:02:37.202 net/hinic: not in enabled drivers build config 00:02:37.202 net/hns3: not in enabled drivers build config 00:02:37.202 net/i40e: not in enabled drivers build config 00:02:37.202 net/iavf: not in enabled drivers build config 00:02:37.202 net/ice: not in enabled drivers build config 00:02:37.202 net/idpf: not in enabled drivers build config 00:02:37.202 net/igc: not in enabled drivers build config 00:02:37.202 net/ionic: not in enabled drivers build config 00:02:37.202 net/ipn3ke: not in enabled drivers build config 00:02:37.202 net/ixgbe: not in enabled drivers build config 00:02:37.202 net/mana: not in enabled drivers build config 00:02:37.202 net/memif: not in enabled drivers build config 00:02:37.202 net/mlx4: not in enabled drivers build config 00:02:37.202 net/mlx5: not in enabled drivers build config 00:02:37.202 net/mvneta: not in enabled drivers build config 00:02:37.202 net/mvpp2: not in enabled drivers build config 00:02:37.202 net/netvsc: not in enabled drivers build config 00:02:37.202 net/nfb: not in enabled drivers build config 00:02:37.202 net/nfp: not in enabled drivers build config 00:02:37.202 net/ngbe: not in enabled drivers build config 00:02:37.202 net/null: not in enabled drivers build config 00:02:37.202 net/octeontx: not in enabled drivers build config 00:02:37.202 net/octeon_ep: not in enabled drivers build config 00:02:37.202 net/pcap: not in enabled drivers build config 00:02:37.202 net/pfe: not in enabled drivers build config 00:02:37.202 net/qede: not in enabled drivers build config 00:02:37.202 net/ring: not in enabled drivers build config 00:02:37.202 net/sfc: not in enabled drivers build config 00:02:37.202 net/softnic: not in enabled drivers build config 00:02:37.202 net/tap: not in enabled drivers build config 00:02:37.202 net/thunderx: not in enabled drivers build config 00:02:37.202 net/txgbe: not in enabled drivers build 
config 00:02:37.202 net/vdev_netvsc: not in enabled drivers build config 00:02:37.202 net/vhost: not in enabled drivers build config 00:02:37.202 net/virtio: not in enabled drivers build config 00:02:37.202 net/vmxnet3: not in enabled drivers build config 00:02:37.202 raw/*: missing internal dependency, "rawdev" 00:02:37.202 crypto/armv8: not in enabled drivers build config 00:02:37.202 crypto/bcmfs: not in enabled drivers build config 00:02:37.202 crypto/caam_jr: not in enabled drivers build config 00:02:37.202 crypto/ccp: not in enabled drivers build config 00:02:37.202 crypto/cnxk: not in enabled drivers build config 00:02:37.202 crypto/dpaa_sec: not in enabled drivers build config 00:02:37.202 crypto/dpaa2_sec: not in enabled drivers build config 00:02:37.202 crypto/ipsec_mb: not in enabled drivers build config 00:02:37.202 crypto/mlx5: not in enabled drivers build config 00:02:37.202 crypto/mvsam: not in enabled drivers build config 00:02:37.202 crypto/nitrox: not in enabled drivers build config 00:02:37.202 crypto/null: not in enabled drivers build config 00:02:37.202 crypto/octeontx: not in enabled drivers build config 00:02:37.202 crypto/openssl: not in enabled drivers build config 00:02:37.202 crypto/scheduler: not in enabled drivers build config 00:02:37.202 crypto/uadk: not in enabled drivers build config 00:02:37.202 crypto/virtio: not in enabled drivers build config 00:02:37.202 compress/isal: not in enabled drivers build config 00:02:37.202 compress/mlx5: not in enabled drivers build config 00:02:37.202 compress/octeontx: not in enabled drivers build config 00:02:37.202 compress/zlib: not in enabled drivers build config 00:02:37.202 regex/*: missing internal dependency, "regexdev" 00:02:37.202 ml/*: missing internal dependency, "mldev" 00:02:37.202 vdpa/ifc: not in enabled drivers build config 00:02:37.202 vdpa/mlx5: not in enabled drivers build config 00:02:37.202 vdpa/nfp: not in enabled drivers build config 00:02:37.202 vdpa/sfc: not in enabled drivers build config 00:02:37.202 event/*: missing internal dependency, "eventdev" 00:02:37.202 baseband/*: missing internal dependency, "bbdev" 00:02:37.202 gpu/*: missing internal dependency, "gpudev" 00:02:37.202 00:02:37.202 00:02:37.202 Build targets in project: 85 00:02:37.202 00:02:37.202 DPDK 23.11.0 00:02:37.202 00:02:37.202 User defined options 00:02:37.202 buildtype : debug 00:02:37.202 default_library : shared 00:02:37.202 libdir : lib 00:02:37.202 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:37.202 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:37.202 c_link_args : 00:02:37.202 cpu_instruction_set: native 00:02:37.202 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:37.202 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:37.202 enable_docs : false 00:02:37.202 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:37.202 enable_kmods : false 00:02:37.202 tests : false 00:02:37.202 00:02:37.202 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:37.202 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 
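
The DPDK section above configures SPDK's bundled DPDK 23.11 as a debug, shared-library build with every optional app and most libraries disabled, and only the pci/vdev buses and the ring mempool driver enabled. An assumed equivalent of that configuration as a direct meson/ninja invocation (SPDK's dpdkbuild makefile generates the real one; add -Ddisable_apps=... and -Ddisable_libs=... with the exact lists printed under "User defined options" above):

cd /home/vagrant/spdk_repo/spdk/dpdk
meson setup build-tmp \
    --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Denable_kmods=false -Dtests=false
ninja -C build-tmp
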
00:02:37.461 [1/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.461 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.461 [3/265] Linking static target lib/librte_kvargs.a 00:02:37.461 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:37.461 [5/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.461 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:37.461 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.461 [8/265] Linking static target lib/librte_log.a 00:02:37.461 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.719 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.977 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.977 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:38.234 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:38.234 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:38.234 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:38.493 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:38.493 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:38.493 [18/265] Linking static target lib/librte_telemetry.a 00:02:38.493 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.493 [20/265] Linking target lib/librte_log.so.24.0 00:02:38.750 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.750 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.751 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.751 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.751 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:39.008 [26/265] Linking target lib/librte_kvargs.so.24.0 00:02:39.266 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:39.266 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:39.266 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:39.266 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:39.266 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.266 [32/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.266 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:39.523 [34/265] Linking target lib/librte_telemetry.so.24.0 00:02:39.523 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.781 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:39.781 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.781 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:39.781 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:39.781 [40/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:40.039 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:40.039 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:40.039 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:40.039 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:40.039 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:40.297 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:40.554 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:40.554 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:40.812 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:40.812 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:41.128 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:41.128 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:41.128 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:41.128 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:41.128 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:41.128 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:41.386 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:41.386 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:41.386 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:41.644 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:41.644 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:41.644 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:41.644 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:41.903 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:42.161 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:42.161 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.161 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.161 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.418 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.418 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:42.418 [71/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:42.676 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.676 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.676 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.676 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.676 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:42.676 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.676 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.934 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.191 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:43.449 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:43.449 [82/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.449 [83/265] Linking static target lib/librte_ring.a 00:02:43.449 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:43.449 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.707 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.707 [87/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.707 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:43.707 [89/265] Linking static target lib/librte_eal.a 00:02:43.707 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:43.965 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:43.965 [92/265] Linking static target lib/librte_rcu.a 00:02:43.965 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.223 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.223 [95/265] Linking static target lib/librte_mempool.a 00:02:44.481 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.481 [97/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.739 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.739 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:44.739 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:44.739 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.739 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.739 [103/265] Linking static target lib/librte_mbuf.a 00:02:44.996 [104/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:45.254 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:45.512 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:45.512 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:45.512 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:45.512 [109/265] Linking static target lib/librte_net.a 00:02:45.769 [110/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.769 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:45.769 [112/265] Linking static target lib/librte_meter.a 00:02:46.026 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:46.026 [114/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.026 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.026 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:46.342 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:46.342 [118/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.923 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:47.488 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:47.488 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 
00:02:47.488 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:47.749 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:47.749 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:47.749 [125/265] Linking static target lib/librte_pci.a 00:02:48.006 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:48.006 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:48.006 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:48.263 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:48.263 [130/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.263 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:48.520 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:48.520 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:48.520 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:48.520 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:48.520 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:48.520 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:48.777 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:48.777 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:48.777 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:48.777 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:48.777 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:48.777 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:49.034 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.293 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:49.293 [146/265] Linking static target lib/librte_cmdline.a 00:02:49.857 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:49.857 [148/265] Linking static target lib/librte_timer.a 00:02:49.857 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:49.857 [150/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:49.857 [151/265] Linking static target lib/librte_ethdev.a 00:02:49.857 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:50.116 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:50.374 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:50.374 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:50.374 [156/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.374 [157/265] Linking static target lib/librte_compressdev.a 00:02:50.374 [158/265] Linking static target lib/librte_hash.a 00:02:50.631 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.892 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:50.892 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:51.157 [162/265] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.157 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.417 [164/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.417 [165/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.417 [166/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.674 [167/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:51.674 [168/265] Linking static target lib/librte_dmadev.a 00:02:51.674 [169/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.931 [170/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.931 [171/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.931 [172/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:52.188 [173/265] Linking static target lib/librte_cryptodev.a 00:02:52.188 [174/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.188 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.445 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.445 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.702 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:52.959 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.959 [180/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.959 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.216 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.216 [183/265] Linking static target lib/librte_power.a 00:02:53.473 [184/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.731 [185/265] Linking static target lib/librte_security.a 00:02:53.731 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.731 [187/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.731 [188/265] Linking static target lib/librte_reorder.a 00:02:53.988 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.246 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:54.503 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.503 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.503 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.759 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.017 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:55.017 [196/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.583 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.583 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.840 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:55.840 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:56.096 [201/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:56.096 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:56.096 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:56.096 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:56.365 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:56.630 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:56.630 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:56.630 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:56.887 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:56.887 [210/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.887 [211/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:56.887 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.887 [213/265] Linking static target drivers/librte_bus_pci.a 00:02:56.887 [214/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:56.887 [215/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.887 [216/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.887 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:56.887 [218/265] Linking static target drivers/librte_bus_vdev.a 00:02:56.887 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.144 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.144 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.144 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.144 [223/265] Linking static target drivers/librte_mempool_ring.a 00:02:57.144 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.402 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.659 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.659 [227/265] Linking static target lib/librte_vhost.a 00:02:57.659 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.916 [229/265] Linking target lib/librte_eal.so.24.0 00:02:57.916 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:58.173 [231/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:58.173 [232/265] Linking target lib/librte_ring.so.24.0 00:02:58.174 [233/265] Linking target lib/librte_pci.so.24.0 00:02:58.174 [234/265] Linking target lib/librte_timer.so.24.0 00:02:58.174 [235/265] Linking target lib/librte_meter.so.24.0 00:02:58.174 [236/265] Linking target lib/librte_dmadev.so.24.0 00:02:58.174 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:58.174 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:58.174 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:58.174 [240/265] Linking target lib/librte_mempool.so.24.0 00:02:58.174 
[241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:58.174 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:58.174 [243/265] Linking target lib/librte_rcu.so.24.0 00:02:58.431 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:58.431 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:58.431 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:58.431 [247/265] Linking target lib/librte_mbuf.so.24.0 00:02:58.431 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:58.688 [249/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:58.688 [250/265] Linking target lib/librte_reorder.so.24.0 00:02:58.688 [251/265] Linking target lib/librte_net.so.24.0 00:02:58.688 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:02:58.688 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:58.956 [254/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:58.956 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:58.956 [256/265] Linking target lib/librte_hash.so.24.0 00:02:58.957 [257/265] Linking target lib/librte_cmdline.so.24.0 00:02:58.957 [258/265] Linking target lib/librte_security.so.24.0 00:02:58.957 [259/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.957 [260/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:59.234 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:59.234 [262/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.235 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:59.235 [264/265] Linking target lib/librte_power.so.24.0 00:02:59.235 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:59.235 INFO: autodetecting backend as ninja 00:02:59.235 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:01.143 CC lib/ut/ut.o 00:03:01.143 CC lib/log/log.o 00:03:01.143 CC lib/ut_mock/mock.o 00:03:01.143 CC lib/log/log_deprecated.o 00:03:01.143 CC lib/log/log_flags.o 00:03:01.143 LIB libspdk_ut_mock.a 00:03:01.401 LIB libspdk_log.a 00:03:01.401 SO libspdk_ut_mock.so.5.0 00:03:01.401 LIB libspdk_ut.a 00:03:01.401 SO libspdk_log.so.6.1 00:03:01.401 SO libspdk_ut.so.1.0 00:03:01.401 SYMLINK libspdk_ut_mock.so 00:03:01.401 SYMLINK libspdk_log.so 00:03:01.401 SYMLINK libspdk_ut.so 00:03:01.659 CXX lib/trace_parser/trace.o 00:03:01.659 CC lib/dma/dma.o 00:03:01.659 CC lib/ioat/ioat.o 00:03:01.659 CC lib/util/base64.o 00:03:01.659 CC lib/util/bit_array.o 00:03:01.659 CC lib/util/cpuset.o 00:03:01.659 CC lib/util/crc16.o 00:03:01.659 CC lib/util/crc32.o 00:03:01.659 CC lib/util/crc32c.o 00:03:01.659 CC lib/vfio_user/host/vfio_user_pci.o 00:03:01.659 CC lib/util/crc32_ieee.o 00:03:01.659 CC lib/util/crc64.o 00:03:01.659 LIB libspdk_dma.a 00:03:01.659 CC lib/util/dif.o 00:03:01.916 SO libspdk_dma.so.3.0 00:03:01.916 CC lib/vfio_user/host/vfio_user.o 00:03:01.916 CC lib/util/fd.o 00:03:01.916 LIB libspdk_ioat.a 00:03:01.916 SYMLINK libspdk_dma.so 00:03:01.916 CC lib/util/file.o 00:03:01.916 CC lib/util/hexlify.o 00:03:01.916 SO libspdk_ioat.so.6.0 00:03:01.916 SYMLINK libspdk_ioat.so 00:03:01.916 CC lib/util/iov.o 
00:03:01.916 CC lib/util/math.o 00:03:01.916 CC lib/util/pipe.o 00:03:01.916 CC lib/util/strerror_tls.o 00:03:01.916 CC lib/util/string.o 00:03:01.916 CC lib/util/uuid.o 00:03:02.175 LIB libspdk_vfio_user.a 00:03:02.175 CC lib/util/fd_group.o 00:03:02.175 SO libspdk_vfio_user.so.4.0 00:03:02.175 CC lib/util/xor.o 00:03:02.175 CC lib/util/zipf.o 00:03:02.175 SYMLINK libspdk_vfio_user.so 00:03:02.432 LIB libspdk_util.a 00:03:02.432 SO libspdk_util.so.8.0 00:03:02.974 SYMLINK libspdk_util.so 00:03:02.974 LIB libspdk_trace_parser.a 00:03:02.974 SO libspdk_trace_parser.so.4.0 00:03:02.974 CC lib/conf/conf.o 00:03:02.974 CC lib/idxd/idxd.o 00:03:02.974 CC lib/idxd/idxd_user.o 00:03:02.974 CC lib/json/json_util.o 00:03:02.974 CC lib/json/json_parse.o 00:03:02.974 CC lib/json/json_write.o 00:03:02.974 CC lib/vmd/vmd.o 00:03:02.974 CC lib/env_dpdk/env.o 00:03:02.974 CC lib/rdma/common.o 00:03:02.974 SYMLINK libspdk_trace_parser.so 00:03:02.974 CC lib/rdma/rdma_verbs.o 00:03:03.233 CC lib/env_dpdk/memory.o 00:03:03.233 CC lib/env_dpdk/pci.o 00:03:03.233 LIB libspdk_conf.a 00:03:03.233 CC lib/vmd/led.o 00:03:03.233 SO libspdk_conf.so.5.0 00:03:03.233 CC lib/env_dpdk/init.o 00:03:03.233 LIB libspdk_json.a 00:03:03.233 LIB libspdk_rdma.a 00:03:03.233 SYMLINK libspdk_conf.so 00:03:03.233 CC lib/env_dpdk/threads.o 00:03:03.489 SO libspdk_json.so.5.1 00:03:03.489 SO libspdk_rdma.so.5.0 00:03:03.489 SYMLINK libspdk_json.so 00:03:03.489 SYMLINK libspdk_rdma.so 00:03:03.489 CC lib/env_dpdk/pci_ioat.o 00:03:03.489 CC lib/env_dpdk/pci_virtio.o 00:03:03.489 LIB libspdk_idxd.a 00:03:03.489 SO libspdk_idxd.so.11.0 00:03:03.489 CC lib/env_dpdk/pci_vmd.o 00:03:03.749 LIB libspdk_vmd.a 00:03:03.749 CC lib/env_dpdk/pci_idxd.o 00:03:03.749 CC lib/jsonrpc/jsonrpc_server.o 00:03:03.749 SYMLINK libspdk_idxd.so 00:03:03.749 SO libspdk_vmd.so.5.0 00:03:03.749 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:03.749 CC lib/jsonrpc/jsonrpc_client.o 00:03:03.749 CC lib/env_dpdk/pci_event.o 00:03:03.749 SYMLINK libspdk_vmd.so 00:03:03.749 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:03.749 CC lib/env_dpdk/sigbus_handler.o 00:03:04.011 CC lib/env_dpdk/pci_dpdk.o 00:03:04.011 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:04.011 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:04.011 LIB libspdk_jsonrpc.a 00:03:04.268 SO libspdk_jsonrpc.so.5.1 00:03:04.268 SYMLINK libspdk_jsonrpc.so 00:03:04.526 CC lib/rpc/rpc.o 00:03:04.526 LIB libspdk_env_dpdk.a 00:03:04.526 LIB libspdk_rpc.a 00:03:04.782 SO libspdk_rpc.so.5.0 00:03:04.782 SYMLINK libspdk_rpc.so 00:03:04.782 SO libspdk_env_dpdk.so.13.0 00:03:04.782 CC lib/notify/notify.o 00:03:04.782 CC lib/notify/notify_rpc.o 00:03:04.782 CC lib/sock/sock.o 00:03:04.782 CC lib/sock/sock_rpc.o 00:03:04.782 CC lib/trace/trace.o 00:03:04.782 CC lib/trace/trace_flags.o 00:03:04.782 CC lib/trace/trace_rpc.o 00:03:05.040 SYMLINK libspdk_env_dpdk.so 00:03:05.040 LIB libspdk_notify.a 00:03:05.040 SO libspdk_notify.so.5.0 00:03:05.297 LIB libspdk_trace.a 00:03:05.297 SYMLINK libspdk_notify.so 00:03:05.297 SO libspdk_trace.so.9.0 00:03:05.297 LIB libspdk_sock.a 00:03:05.297 SYMLINK libspdk_trace.so 00:03:05.297 SO libspdk_sock.so.8.0 00:03:05.556 SYMLINK libspdk_sock.so 00:03:05.556 CC lib/thread/thread.o 00:03:05.556 CC lib/thread/iobuf.o 00:03:05.556 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:05.556 CC lib/nvme/nvme_ctrlr.o 00:03:05.556 CC lib/nvme/nvme_fabric.o 00:03:05.556 CC lib/nvme/nvme_ns_cmd.o 00:03:05.556 CC lib/nvme/nvme_ns.o 00:03:05.556 CC lib/nvme/nvme_pcie_common.o 00:03:05.556 CC lib/nvme/nvme_pcie.o 00:03:05.556 CC 
lib/nvme/nvme_qpair.o 00:03:05.814 CC lib/nvme/nvme.o 00:03:06.378 CC lib/nvme/nvme_quirks.o 00:03:06.378 CC lib/nvme/nvme_transport.o 00:03:06.642 CC lib/nvme/nvme_discovery.o 00:03:06.642 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:06.900 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:06.900 CC lib/nvme/nvme_tcp.o 00:03:07.157 CC lib/nvme/nvme_opal.o 00:03:07.157 CC lib/nvme/nvme_io_msg.o 00:03:07.157 CC lib/nvme/nvme_poll_group.o 00:03:07.157 LIB libspdk_thread.a 00:03:07.157 SO libspdk_thread.so.9.0 00:03:07.414 SYMLINK libspdk_thread.so 00:03:07.414 CC lib/nvme/nvme_zns.o 00:03:07.414 CC lib/nvme/nvme_cuse.o 00:03:07.414 CC lib/accel/accel.o 00:03:07.670 CC lib/blob/blobstore.o 00:03:07.928 CC lib/init/json_config.o 00:03:07.928 CC lib/virtio/virtio.o 00:03:08.185 CC lib/vfu_tgt/tgt_endpoint.o 00:03:08.185 CC lib/virtio/virtio_vhost_user.o 00:03:08.185 CC lib/virtio/virtio_vfio_user.o 00:03:08.185 CC lib/blob/request.o 00:03:08.185 CC lib/blob/zeroes.o 00:03:08.444 CC lib/init/subsystem.o 00:03:08.444 CC lib/blob/blob_bs_dev.o 00:03:08.701 CC lib/vfu_tgt/tgt_rpc.o 00:03:08.701 CC lib/virtio/virtio_pci.o 00:03:08.701 CC lib/accel/accel_rpc.o 00:03:08.701 CC lib/init/subsystem_rpc.o 00:03:08.701 CC lib/accel/accel_sw.o 00:03:08.701 CC lib/nvme/nvme_vfio_user.o 00:03:08.701 CC lib/nvme/nvme_rdma.o 00:03:08.701 CC lib/init/rpc.o 00:03:08.958 LIB libspdk_vfu_tgt.a 00:03:08.958 LIB libspdk_accel.a 00:03:08.958 LIB libspdk_virtio.a 00:03:08.958 SO libspdk_vfu_tgt.so.2.0 00:03:08.958 LIB libspdk_init.a 00:03:08.958 SO libspdk_accel.so.14.0 00:03:08.958 SO libspdk_virtio.so.6.0 00:03:08.958 SO libspdk_init.so.4.0 00:03:09.215 SYMLINK libspdk_vfu_tgt.so 00:03:09.215 SYMLINK libspdk_virtio.so 00:03:09.215 SYMLINK libspdk_accel.so 00:03:09.215 SYMLINK libspdk_init.so 00:03:09.215 CC lib/bdev/bdev.o 00:03:09.215 CC lib/bdev/bdev_rpc.o 00:03:09.215 CC lib/bdev/part.o 00:03:09.215 CC lib/bdev/bdev_zone.o 00:03:09.215 CC lib/bdev/scsi_nvme.o 00:03:09.472 CC lib/event/app.o 00:03:09.472 CC lib/event/reactor.o 00:03:09.729 CC lib/event/log_rpc.o 00:03:09.729 CC lib/event/app_rpc.o 00:03:09.729 CC lib/event/scheduler_static.o 00:03:09.986 LIB libspdk_event.a 00:03:10.244 SO libspdk_event.so.12.0 00:03:10.244 SYMLINK libspdk_event.so 00:03:10.505 LIB libspdk_nvme.a 00:03:10.763 SO libspdk_nvme.so.12.0 00:03:11.020 SYMLINK libspdk_nvme.so 00:03:11.277 LIB libspdk_blob.a 00:03:11.534 SO libspdk_blob.so.10.1 00:03:11.534 SYMLINK libspdk_blob.so 00:03:11.792 CC lib/blobfs/tree.o 00:03:11.792 CC lib/blobfs/blobfs.o 00:03:11.792 CC lib/lvol/lvol.o 00:03:12.358 LIB libspdk_bdev.a 00:03:12.358 SO libspdk_bdev.so.14.0 00:03:12.616 SYMLINK libspdk_bdev.so 00:03:12.616 LIB libspdk_lvol.a 00:03:12.616 SO libspdk_lvol.so.9.1 00:03:12.873 LIB libspdk_blobfs.a 00:03:12.873 CC lib/scsi/dev.o 00:03:12.873 CC lib/ublk/ublk.o 00:03:12.873 CC lib/scsi/lun.o 00:03:12.873 CC lib/nbd/nbd.o 00:03:12.873 CC lib/ublk/ublk_rpc.o 00:03:12.873 CC lib/scsi/port.o 00:03:12.873 CC lib/nvmf/ctrlr.o 00:03:12.873 CC lib/ftl/ftl_core.o 00:03:12.873 SO libspdk_blobfs.so.9.0 00:03:12.873 SYMLINK libspdk_lvol.so 00:03:12.873 CC lib/ftl/ftl_init.o 00:03:12.873 SYMLINK libspdk_blobfs.so 00:03:12.873 CC lib/ftl/ftl_layout.o 00:03:13.131 CC lib/ftl/ftl_debug.o 00:03:13.131 CC lib/ftl/ftl_io.o 00:03:13.131 CC lib/scsi/scsi.o 00:03:13.131 CC lib/nvmf/ctrlr_discovery.o 00:03:13.131 CC lib/nvmf/ctrlr_bdev.o 00:03:13.131 CC lib/nvmf/subsystem.o 00:03:13.400 CC lib/scsi/scsi_bdev.o 00:03:13.400 CC lib/nvmf/nvmf.o 00:03:13.400 CC lib/nbd/nbd_rpc.o 00:03:13.400 LIB 
libspdk_ublk.a 00:03:13.400 CC lib/ftl/ftl_sb.o 00:03:13.400 SO libspdk_ublk.so.2.0 00:03:13.666 CC lib/scsi/scsi_pr.o 00:03:13.666 SYMLINK libspdk_ublk.so 00:03:13.666 CC lib/scsi/scsi_rpc.o 00:03:13.666 LIB libspdk_nbd.a 00:03:13.666 SO libspdk_nbd.so.6.0 00:03:13.666 SYMLINK libspdk_nbd.so 00:03:13.666 CC lib/nvmf/nvmf_rpc.o 00:03:13.924 CC lib/nvmf/transport.o 00:03:13.924 CC lib/ftl/ftl_l2p.o 00:03:13.924 CC lib/nvmf/tcp.o 00:03:13.924 CC lib/nvmf/vfio_user.o 00:03:14.183 CC lib/nvmf/rdma.o 00:03:14.183 CC lib/scsi/task.o 00:03:14.183 CC lib/ftl/ftl_l2p_flat.o 00:03:14.441 LIB libspdk_scsi.a 00:03:14.441 CC lib/ftl/ftl_nv_cache.o 00:03:14.751 SO libspdk_scsi.so.8.0 00:03:14.751 SYMLINK libspdk_scsi.so 00:03:15.008 CC lib/ftl/ftl_band.o 00:03:15.008 CC lib/iscsi/conn.o 00:03:15.008 CC lib/iscsi/init_grp.o 00:03:15.008 CC lib/iscsi/iscsi.o 00:03:15.266 CC lib/iscsi/md5.o 00:03:15.266 CC lib/ftl/ftl_band_ops.o 00:03:15.266 CC lib/iscsi/param.o 00:03:15.523 CC lib/iscsi/portal_grp.o 00:03:15.780 CC lib/iscsi/tgt_node.o 00:03:15.780 CC lib/iscsi/iscsi_subsystem.o 00:03:15.780 CC lib/iscsi/iscsi_rpc.o 00:03:15.780 CC lib/vhost/vhost.o 00:03:16.037 CC lib/iscsi/task.o 00:03:16.037 CC lib/ftl/ftl_writer.o 00:03:16.037 CC lib/vhost/vhost_rpc.o 00:03:16.295 CC lib/vhost/vhost_scsi.o 00:03:16.295 CC lib/vhost/vhost_blk.o 00:03:16.295 CC lib/vhost/rte_vhost_user.o 00:03:16.295 CC lib/ftl/ftl_rq.o 00:03:16.295 CC lib/ftl/ftl_reloc.o 00:03:16.551 CC lib/ftl/ftl_l2p_cache.o 00:03:16.551 CC lib/ftl/ftl_p2l.o 00:03:16.808 CC lib/ftl/mngt/ftl_mngt.o 00:03:16.808 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:16.808 LIB libspdk_nvmf.a 00:03:16.809 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:17.066 SO libspdk_nvmf.so.17.0 00:03:17.066 LIB libspdk_iscsi.a 00:03:17.066 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:17.066 SO libspdk_iscsi.so.7.0 00:03:17.066 SYMLINK libspdk_nvmf.so 00:03:17.066 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:17.322 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:17.322 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:17.322 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:17.322 SYMLINK libspdk_iscsi.so 00:03:17.322 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:17.322 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:17.322 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:17.322 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:17.322 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:17.578 CC lib/ftl/utils/ftl_conf.o 00:03:17.578 CC lib/ftl/utils/ftl_md.o 00:03:17.578 CC lib/ftl/utils/ftl_mempool.o 00:03:17.578 CC lib/ftl/utils/ftl_bitmap.o 00:03:17.578 CC lib/ftl/utils/ftl_property.o 00:03:17.578 LIB libspdk_vhost.a 00:03:17.578 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:17.578 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:17.578 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:17.578 SO libspdk_vhost.so.7.1 00:03:17.835 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:17.835 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:17.835 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:17.835 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:17.835 SYMLINK libspdk_vhost.so 00:03:17.835 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:17.835 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:17.835 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:17.835 CC lib/ftl/base/ftl_base_dev.o 00:03:18.093 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.093 CC lib/ftl/ftl_trace.o 00:03:18.351 LIB libspdk_ftl.a 00:03:18.607 SO libspdk_ftl.so.8.0 00:03:19.170 SYMLINK libspdk_ftl.so 00:03:19.170 CC module/env_dpdk/env_dpdk_rpc.o 00:03:19.170 CC module/vfu_device/vfu_virtio.o 00:03:19.426 CC module/accel/ioat/accel_ioat.o 00:03:19.427 CC module/blob/bdev/blob_bdev.o 
00:03:19.427 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:19.427 CC module/sock/posix/posix.o 00:03:19.427 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:19.427 CC module/scheduler/gscheduler/gscheduler.o 00:03:19.427 CC module/accel/dsa/accel_dsa.o 00:03:19.427 CC module/accel/error/accel_error.o 00:03:19.427 LIB libspdk_env_dpdk_rpc.a 00:03:19.427 SO libspdk_env_dpdk_rpc.so.5.0 00:03:19.684 LIB libspdk_scheduler_dpdk_governor.a 00:03:19.684 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:19.684 LIB libspdk_scheduler_gscheduler.a 00:03:19.684 SYMLINK libspdk_env_dpdk_rpc.so 00:03:19.684 CC module/accel/dsa/accel_dsa_rpc.o 00:03:19.684 SO libspdk_scheduler_gscheduler.so.3.0 00:03:19.684 CC module/accel/ioat/accel_ioat_rpc.o 00:03:19.684 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:19.684 CC module/accel/error/accel_error_rpc.o 00:03:19.684 LIB libspdk_scheduler_dynamic.a 00:03:19.684 SYMLINK libspdk_scheduler_gscheduler.so 00:03:19.684 CC module/vfu_device/vfu_virtio_blk.o 00:03:19.684 CC module/vfu_device/vfu_virtio_scsi.o 00:03:19.684 SO libspdk_scheduler_dynamic.so.3.0 00:03:19.941 LIB libspdk_blob_bdev.a 00:03:19.941 SYMLINK libspdk_scheduler_dynamic.so 00:03:19.941 CC module/vfu_device/vfu_virtio_rpc.o 00:03:19.941 SO libspdk_blob_bdev.so.10.1 00:03:19.941 CC module/accel/iaa/accel_iaa.o 00:03:19.941 LIB libspdk_accel_dsa.a 00:03:19.941 LIB libspdk_accel_ioat.a 00:03:19.941 LIB libspdk_accel_error.a 00:03:19.941 SO libspdk_accel_dsa.so.4.0 00:03:19.941 SYMLINK libspdk_blob_bdev.so 00:03:19.941 SO libspdk_accel_ioat.so.5.0 00:03:19.941 SO libspdk_accel_error.so.1.0 00:03:20.199 SYMLINK libspdk_accel_ioat.so 00:03:20.199 SYMLINK libspdk_accel_dsa.so 00:03:20.199 SYMLINK libspdk_accel_error.so 00:03:20.199 CC module/accel/iaa/accel_iaa_rpc.o 00:03:20.199 CC module/bdev/delay/vbdev_delay.o 00:03:20.199 CC module/bdev/error/vbdev_error.o 00:03:20.456 CC module/bdev/lvol/vbdev_lvol.o 00:03:20.456 CC module/blobfs/bdev/blobfs_bdev.o 00:03:20.456 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:20.456 LIB libspdk_accel_iaa.a 00:03:20.456 CC module/bdev/gpt/gpt.o 00:03:20.456 LIB libspdk_vfu_device.a 00:03:20.456 CC module/bdev/malloc/bdev_malloc.o 00:03:20.456 SO libspdk_accel_iaa.so.2.0 00:03:20.456 SO libspdk_vfu_device.so.2.0 00:03:20.457 SYMLINK libspdk_accel_iaa.so 00:03:20.457 CC module/bdev/gpt/vbdev_gpt.o 00:03:20.457 LIB libspdk_sock_posix.a 00:03:20.713 SYMLINK libspdk_vfu_device.so 00:03:20.713 SO libspdk_sock_posix.so.5.0 00:03:20.713 LIB libspdk_blobfs_bdev.a 00:03:20.713 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:20.713 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:20.713 SO libspdk_blobfs_bdev.so.5.0 00:03:20.713 CC module/bdev/error/vbdev_error_rpc.o 00:03:20.713 SYMLINK libspdk_sock_posix.so 00:03:20.713 CC module/bdev/null/bdev_null.o 00:03:20.713 SYMLINK libspdk_blobfs_bdev.so 00:03:20.970 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:20.970 CC module/bdev/null/bdev_null_rpc.o 00:03:20.970 CC module/bdev/nvme/bdev_nvme.o 00:03:20.970 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:20.970 LIB libspdk_bdev_delay.a 00:03:20.970 LIB libspdk_bdev_lvol.a 00:03:20.970 LIB libspdk_bdev_error.a 00:03:20.970 LIB libspdk_bdev_gpt.a 00:03:20.970 SO libspdk_bdev_delay.so.5.0 00:03:20.970 LIB libspdk_bdev_malloc.a 00:03:20.970 SO libspdk_bdev_lvol.so.5.0 00:03:20.970 SO libspdk_bdev_error.so.5.0 00:03:20.970 SO libspdk_bdev_gpt.so.5.0 00:03:21.228 SO libspdk_bdev_malloc.so.5.0 00:03:21.228 SYMLINK libspdk_bdev_error.so 00:03:21.228 SYMLINK libspdk_bdev_lvol.so 00:03:21.228 
SYMLINK libspdk_bdev_delay.so 00:03:21.228 CC module/bdev/nvme/nvme_rpc.o 00:03:21.228 CC module/bdev/nvme/bdev_mdns_client.o 00:03:21.228 SYMLINK libspdk_bdev_gpt.so 00:03:21.228 SYMLINK libspdk_bdev_malloc.so 00:03:21.228 CC module/bdev/nvme/vbdev_opal.o 00:03:21.228 CC module/bdev/passthru/vbdev_passthru.o 00:03:21.228 LIB libspdk_bdev_null.a 00:03:21.228 SO libspdk_bdev_null.so.5.0 00:03:21.228 CC module/bdev/raid/bdev_raid.o 00:03:21.228 CC module/bdev/split/vbdev_split.o 00:03:21.492 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:21.492 SYMLINK libspdk_bdev_null.so 00:03:21.492 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:21.492 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:21.492 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:21.492 CC module/bdev/split/vbdev_split_rpc.o 00:03:21.750 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:21.750 CC module/bdev/raid/bdev_raid_rpc.o 00:03:22.006 LIB libspdk_bdev_passthru.a 00:03:22.006 LIB libspdk_bdev_split.a 00:03:22.006 SO libspdk_bdev_passthru.so.5.0 00:03:22.006 CC module/bdev/aio/bdev_aio.o 00:03:22.006 SO libspdk_bdev_split.so.5.0 00:03:22.006 CC module/bdev/ftl/bdev_ftl.o 00:03:22.006 SYMLINK libspdk_bdev_passthru.so 00:03:22.006 CC module/bdev/aio/bdev_aio_rpc.o 00:03:22.006 LIB libspdk_bdev_zone_block.a 00:03:22.006 CC module/bdev/raid/bdev_raid_sb.o 00:03:22.006 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.006 SYMLINK libspdk_bdev_split.so 00:03:22.006 CC module/bdev/raid/raid0.o 00:03:22.006 SO libspdk_bdev_zone_block.so.5.0 00:03:22.264 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.264 SYMLINK libspdk_bdev_zone_block.so 00:03:22.264 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.264 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.264 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.522 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.522 CC module/bdev/raid/raid1.o 00:03:22.522 CC module/bdev/raid/concat.o 00:03:22.522 LIB libspdk_bdev_aio.a 00:03:22.522 LIB libspdk_bdev_ftl.a 00:03:22.522 SO libspdk_bdev_aio.so.5.0 00:03:22.522 SO libspdk_bdev_ftl.so.5.0 00:03:22.779 SYMLINK libspdk_bdev_aio.so 00:03:22.779 SYMLINK libspdk_bdev_ftl.so 00:03:22.779 LIB libspdk_bdev_iscsi.a 00:03:22.779 SO libspdk_bdev_iscsi.so.5.0 00:03:22.779 LIB libspdk_bdev_raid.a 00:03:22.779 SYMLINK libspdk_bdev_iscsi.so 00:03:23.038 SO libspdk_bdev_raid.so.5.0 00:03:23.038 SYMLINK libspdk_bdev_raid.so 00:03:23.038 LIB libspdk_bdev_virtio.a 00:03:23.038 SO libspdk_bdev_virtio.so.5.0 00:03:23.295 SYMLINK libspdk_bdev_virtio.so 00:03:23.858 LIB libspdk_bdev_nvme.a 00:03:23.858 SO libspdk_bdev_nvme.so.6.0 00:03:24.158 SYMLINK libspdk_bdev_nvme.so 00:03:24.415 CC module/event/subsystems/vmd/vmd.o 00:03:24.415 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:24.415 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:24.415 CC module/event/subsystems/scheduler/scheduler.o 00:03:24.415 CC module/event/subsystems/sock/sock.o 00:03:24.415 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.415 CC module/event/subsystems/iobuf/iobuf.o 00:03:24.415 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:24.415 LIB libspdk_event_sock.a 00:03:24.415 LIB libspdk_event_vhost_blk.a 00:03:24.415 SO libspdk_event_sock.so.4.0 00:03:24.415 SO libspdk_event_vhost_blk.so.2.0 00:03:24.415 LIB libspdk_event_vmd.a 00:03:24.415 LIB libspdk_event_scheduler.a 00:03:24.415 SYMLINK libspdk_event_sock.so 00:03:24.415 LIB libspdk_event_vfu_tgt.a 00:03:24.415 LIB libspdk_event_iobuf.a 00:03:24.674 SYMLINK libspdk_event_vhost_blk.so 00:03:24.674 SO libspdk_event_scheduler.so.3.0 
00:03:24.674 SO libspdk_event_vmd.so.5.0 00:03:24.674 SO libspdk_event_vfu_tgt.so.2.0 00:03:24.674 SO libspdk_event_iobuf.so.2.0 00:03:24.674 SYMLINK libspdk_event_scheduler.so 00:03:24.674 SYMLINK libspdk_event_vmd.so 00:03:24.674 SYMLINK libspdk_event_vfu_tgt.so 00:03:24.674 SYMLINK libspdk_event_iobuf.so 00:03:24.932 CC module/event/subsystems/accel/accel.o 00:03:24.932 LIB libspdk_event_accel.a 00:03:24.932 SO libspdk_event_accel.so.5.0 00:03:25.190 SYMLINK libspdk_event_accel.so 00:03:25.190 CC module/event/subsystems/bdev/bdev.o 00:03:25.448 LIB libspdk_event_bdev.a 00:03:25.448 SO libspdk_event_bdev.so.5.0 00:03:25.448 SYMLINK libspdk_event_bdev.so 00:03:25.706 CC module/event/subsystems/scsi/scsi.o 00:03:25.706 CC module/event/subsystems/ublk/ublk.o 00:03:25.706 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:25.706 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:25.706 CC module/event/subsystems/nbd/nbd.o 00:03:25.964 LIB libspdk_event_nbd.a 00:03:25.964 LIB libspdk_event_ublk.a 00:03:25.964 SO libspdk_event_nbd.so.5.0 00:03:25.964 LIB libspdk_event_scsi.a 00:03:25.964 SO libspdk_event_ublk.so.2.0 00:03:25.964 SYMLINK libspdk_event_nbd.so 00:03:25.964 SO libspdk_event_scsi.so.5.0 00:03:25.964 LIB libspdk_event_nvmf.a 00:03:25.964 SYMLINK libspdk_event_ublk.so 00:03:25.964 SYMLINK libspdk_event_scsi.so 00:03:25.964 SO libspdk_event_nvmf.so.5.0 00:03:26.222 SYMLINK libspdk_event_nvmf.so 00:03:26.222 CC module/event/subsystems/iscsi/iscsi.o 00:03:26.222 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:26.479 LIB libspdk_event_iscsi.a 00:03:26.479 LIB libspdk_event_vhost_scsi.a 00:03:26.479 SO libspdk_event_iscsi.so.5.0 00:03:26.479 SO libspdk_event_vhost_scsi.so.2.0 00:03:26.479 SYMLINK libspdk_event_iscsi.so 00:03:26.479 SYMLINK libspdk_event_vhost_scsi.so 00:03:26.479 SO libspdk.so.5.0 00:03:26.479 SYMLINK libspdk.so 00:03:26.737 TEST_HEADER include/spdk/accel.h 00:03:26.737 TEST_HEADER include/spdk/accel_module.h 00:03:26.737 TEST_HEADER include/spdk/assert.h 00:03:26.737 CXX app/trace/trace.o 00:03:26.737 TEST_HEADER include/spdk/barrier.h 00:03:26.737 TEST_HEADER include/spdk/base64.h 00:03:26.737 TEST_HEADER include/spdk/bdev.h 00:03:26.737 TEST_HEADER include/spdk/bdev_module.h 00:03:26.737 TEST_HEADER include/spdk/bdev_zone.h 00:03:26.737 TEST_HEADER include/spdk/bit_array.h 00:03:26.737 TEST_HEADER include/spdk/bit_pool.h 00:03:26.737 TEST_HEADER include/spdk/blob_bdev.h 00:03:26.737 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:26.737 TEST_HEADER include/spdk/blobfs.h 00:03:26.737 TEST_HEADER include/spdk/blob.h 00:03:26.737 TEST_HEADER include/spdk/conf.h 00:03:26.737 TEST_HEADER include/spdk/config.h 00:03:26.737 TEST_HEADER include/spdk/cpuset.h 00:03:26.737 TEST_HEADER include/spdk/crc16.h 00:03:26.737 TEST_HEADER include/spdk/crc32.h 00:03:26.737 TEST_HEADER include/spdk/crc64.h 00:03:26.737 TEST_HEADER include/spdk/dif.h 00:03:26.737 TEST_HEADER include/spdk/dma.h 00:03:26.737 CC test/event/event_perf/event_perf.o 00:03:26.737 TEST_HEADER include/spdk/endian.h 00:03:26.737 TEST_HEADER include/spdk/env_dpdk.h 00:03:26.737 TEST_HEADER include/spdk/env.h 00:03:26.737 TEST_HEADER include/spdk/event.h 00:03:26.737 CC examples/accel/perf/accel_perf.o 00:03:26.737 TEST_HEADER include/spdk/fd_group.h 00:03:26.737 TEST_HEADER include/spdk/fd.h 00:03:26.737 TEST_HEADER include/spdk/file.h 00:03:26.737 TEST_HEADER include/spdk/ftl.h 00:03:26.737 TEST_HEADER include/spdk/gpt_spec.h 00:03:26.737 TEST_HEADER include/spdk/hexlify.h 00:03:26.737 TEST_HEADER 
include/spdk/histogram_data.h 00:03:26.737 TEST_HEADER include/spdk/idxd.h 00:03:26.737 TEST_HEADER include/spdk/idxd_spec.h 00:03:26.737 TEST_HEADER include/spdk/init.h 00:03:26.737 TEST_HEADER include/spdk/ioat.h 00:03:26.737 TEST_HEADER include/spdk/ioat_spec.h 00:03:26.737 TEST_HEADER include/spdk/iscsi_spec.h 00:03:26.737 TEST_HEADER include/spdk/json.h 00:03:26.737 CC test/app/bdev_svc/bdev_svc.o 00:03:26.737 TEST_HEADER include/spdk/jsonrpc.h 00:03:26.737 TEST_HEADER include/spdk/likely.h 00:03:26.737 TEST_HEADER include/spdk/log.h 00:03:26.994 TEST_HEADER include/spdk/lvol.h 00:03:26.994 TEST_HEADER include/spdk/memory.h 00:03:26.994 CC test/dma/test_dma/test_dma.o 00:03:26.994 TEST_HEADER include/spdk/mmio.h 00:03:26.994 CC test/accel/dif/dif.o 00:03:26.994 TEST_HEADER include/spdk/nbd.h 00:03:26.994 CC test/blobfs/mkfs/mkfs.o 00:03:26.994 TEST_HEADER include/spdk/notify.h 00:03:26.994 TEST_HEADER include/spdk/nvme.h 00:03:26.994 CC test/bdev/bdevio/bdevio.o 00:03:26.994 TEST_HEADER include/spdk/nvme_intel.h 00:03:26.994 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:26.994 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:26.994 TEST_HEADER include/spdk/nvme_spec.h 00:03:26.994 TEST_HEADER include/spdk/nvme_zns.h 00:03:26.994 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:26.994 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:26.994 CC test/env/mem_callbacks/mem_callbacks.o 00:03:26.994 TEST_HEADER include/spdk/nvmf.h 00:03:26.994 TEST_HEADER include/spdk/nvmf_spec.h 00:03:26.994 TEST_HEADER include/spdk/nvmf_transport.h 00:03:26.994 TEST_HEADER include/spdk/opal.h 00:03:26.994 TEST_HEADER include/spdk/opal_spec.h 00:03:26.994 TEST_HEADER include/spdk/pci_ids.h 00:03:26.994 TEST_HEADER include/spdk/pipe.h 00:03:26.994 TEST_HEADER include/spdk/queue.h 00:03:26.994 TEST_HEADER include/spdk/reduce.h 00:03:26.994 TEST_HEADER include/spdk/rpc.h 00:03:26.994 TEST_HEADER include/spdk/scheduler.h 00:03:26.994 TEST_HEADER include/spdk/scsi.h 00:03:26.994 TEST_HEADER include/spdk/scsi_spec.h 00:03:26.994 TEST_HEADER include/spdk/sock.h 00:03:26.994 TEST_HEADER include/spdk/stdinc.h 00:03:26.994 TEST_HEADER include/spdk/string.h 00:03:26.994 TEST_HEADER include/spdk/thread.h 00:03:26.994 TEST_HEADER include/spdk/trace.h 00:03:26.994 TEST_HEADER include/spdk/trace_parser.h 00:03:26.994 TEST_HEADER include/spdk/tree.h 00:03:26.994 TEST_HEADER include/spdk/ublk.h 00:03:26.994 TEST_HEADER include/spdk/util.h 00:03:26.994 TEST_HEADER include/spdk/uuid.h 00:03:26.994 TEST_HEADER include/spdk/version.h 00:03:26.994 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:26.994 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:26.994 TEST_HEADER include/spdk/vhost.h 00:03:26.994 TEST_HEADER include/spdk/vmd.h 00:03:26.994 TEST_HEADER include/spdk/xor.h 00:03:26.994 TEST_HEADER include/spdk/zipf.h 00:03:26.994 CXX test/cpp_headers/accel.o 00:03:26.994 LINK bdev_svc 00:03:26.994 LINK event_perf 00:03:27.251 LINK mkfs 00:03:27.251 CXX test/cpp_headers/accel_module.o 00:03:27.251 CC test/event/reactor/reactor.o 00:03:27.251 LINK spdk_trace 00:03:27.509 CXX test/cpp_headers/assert.o 00:03:27.509 LINK dif 00:03:27.509 LINK accel_perf 00:03:27.509 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:27.509 LINK test_dma 00:03:27.509 CXX test/cpp_headers/barrier.o 00:03:27.509 LINK bdevio 00:03:27.509 LINK reactor 00:03:27.766 CC app/trace_record/trace_record.o 00:03:27.766 CXX test/cpp_headers/base64.o 00:03:27.766 CC app/nvmf_tgt/nvmf_main.o 00:03:27.766 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.766 LINK mem_callbacks 
00:03:27.766 CC test/event/reactor_perf/reactor_perf.o 00:03:28.024 CC app/iscsi_tgt/iscsi_tgt.o 00:03:28.024 CC examples/bdev/bdevperf/bdevperf.o 00:03:28.024 CC test/app/histogram_perf/histogram_perf.o 00:03:28.024 LINK spdk_trace_record 00:03:28.024 LINK reactor_perf 00:03:28.024 CXX test/cpp_headers/bdev.o 00:03:28.024 CC test/env/vtophys/vtophys.o 00:03:28.024 LINK nvme_fuzz 00:03:28.024 LINK nvmf_tgt 00:03:28.282 LINK iscsi_tgt 00:03:28.282 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:28.282 LINK hello_bdev 00:03:28.282 CC test/event/app_repeat/app_repeat.o 00:03:28.282 LINK histogram_perf 00:03:28.282 CXX test/cpp_headers/bdev_module.o 00:03:28.282 LINK vtophys 00:03:28.282 CXX test/cpp_headers/bdev_zone.o 00:03:28.282 LINK env_dpdk_post_init 00:03:28.540 LINK app_repeat 00:03:28.540 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:28.540 CXX test/cpp_headers/bit_array.o 00:03:28.540 CC app/spdk_tgt/spdk_tgt.o 00:03:28.540 CC app/spdk_lspci/spdk_lspci.o 00:03:28.540 CC app/spdk_nvme_perf/perf.o 00:03:28.540 CC app/spdk_nvme_identify/identify.o 00:03:28.797 CC test/env/memory/memory_ut.o 00:03:28.797 CXX test/cpp_headers/bit_pool.o 00:03:28.797 CC test/event/scheduler/scheduler.o 00:03:28.797 LINK spdk_lspci 00:03:28.797 CC test/lvol/esnap/esnap.o 00:03:28.797 LINK spdk_tgt 00:03:29.055 LINK bdevperf 00:03:29.055 CXX test/cpp_headers/blob_bdev.o 00:03:29.055 LINK scheduler 00:03:29.055 CC test/nvme/aer/aer.o 00:03:29.313 CXX test/cpp_headers/blobfs_bdev.o 00:03:29.313 CC examples/ioat/perf/perf.o 00:03:29.313 CC examples/blob/hello_world/hello_blob.o 00:03:29.313 CC examples/nvme/hello_world/hello_world.o 00:03:29.313 CXX test/cpp_headers/blobfs.o 00:03:29.570 LINK aer 00:03:29.570 LINK spdk_nvme_perf 00:03:29.570 LINK spdk_nvme_identify 00:03:29.570 LINK ioat_perf 00:03:29.570 CXX test/cpp_headers/blob.o 00:03:29.570 LINK hello_world 00:03:29.570 LINK hello_blob 00:03:29.827 CC test/nvme/reset/reset.o 00:03:29.827 CXX test/cpp_headers/conf.o 00:03:29.827 LINK memory_ut 00:03:29.827 CC examples/ioat/verify/verify.o 00:03:29.827 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.827 CC test/rpc_client/rpc_client_test.o 00:03:29.827 CC examples/nvme/reconnect/reconnect.o 00:03:30.085 CXX test/cpp_headers/config.o 00:03:30.085 LINK reset 00:03:30.085 CXX test/cpp_headers/cpuset.o 00:03:30.085 CC examples/blob/cli/blobcli.o 00:03:30.085 LINK verify 00:03:30.085 LINK rpc_client_test 00:03:30.085 CC test/env/pci/pci_ut.o 00:03:30.085 LINK spdk_nvme_discover 00:03:30.342 CXX test/cpp_headers/crc16.o 00:03:30.342 LINK reconnect 00:03:30.342 CC test/app/jsoncat/jsoncat.o 00:03:30.342 CC test/nvme/sgl/sgl.o 00:03:30.342 CC test/thread/poller_perf/poller_perf.o 00:03:30.600 CC app/spdk_top/spdk_top.o 00:03:30.600 LINK jsoncat 00:03:30.600 LINK pci_ut 00:03:30.600 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:30.600 LINK poller_perf 00:03:30.600 CXX test/cpp_headers/crc32.o 00:03:30.600 LINK blobcli 00:03:30.600 LINK sgl 00:03:30.600 LINK iscsi_fuzz 00:03:30.857 CXX test/cpp_headers/crc64.o 00:03:30.857 CC app/vhost/vhost.o 00:03:30.857 CC examples/nvme/arbitration/arbitration.o 00:03:31.115 CC test/nvme/e2edp/nvme_dp.o 00:03:31.115 CC app/spdk_dd/spdk_dd.o 00:03:31.115 CXX test/cpp_headers/dif.o 00:03:31.115 CC app/fio/nvme/fio_plugin.o 00:03:31.115 LINK nvme_manage 00:03:31.115 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:31.373 LINK vhost 00:03:31.373 CXX test/cpp_headers/dma.o 00:03:31.373 LINK nvme_dp 00:03:31.373 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:31.373 CC 
app/fio/bdev/fio_plugin.o 00:03:31.373 CXX test/cpp_headers/endian.o 00:03:31.373 LINK spdk_top 00:03:31.630 LINK arbitration 00:03:31.630 LINK spdk_dd 00:03:31.630 CC examples/nvme/hotplug/hotplug.o 00:03:31.630 CXX test/cpp_headers/env_dpdk.o 00:03:31.631 CC test/nvme/overhead/overhead.o 00:03:31.631 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:31.888 LINK spdk_nvme 00:03:31.888 CC test/app/stub/stub.o 00:03:31.888 CXX test/cpp_headers/env.o 00:03:31.888 CC examples/nvme/abort/abort.o 00:03:32.145 LINK hotplug 00:03:32.145 LINK stub 00:03:32.145 CXX test/cpp_headers/event.o 00:03:32.145 LINK cmb_copy 00:03:32.145 LINK overhead 00:03:32.145 LINK vhost_fuzz 00:03:32.145 CC examples/sock/hello_world/hello_sock.o 00:03:32.145 LINK spdk_bdev 00:03:32.145 CXX test/cpp_headers/fd_group.o 00:03:32.402 LINK abort 00:03:32.402 CXX test/cpp_headers/fd.o 00:03:32.402 CXX test/cpp_headers/file.o 00:03:32.402 CC test/nvme/err_injection/err_injection.o 00:03:32.402 CC test/nvme/startup/startup.o 00:03:32.402 CC test/nvme/reserve/reserve.o 00:03:32.402 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.402 LINK hello_sock 00:03:32.402 CC test/nvme/simple_copy/simple_copy.o 00:03:32.664 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:32.664 CXX test/cpp_headers/ftl.o 00:03:32.664 LINK lsvmd 00:03:32.664 LINK startup 00:03:32.664 CC test/nvme/connect_stress/connect_stress.o 00:03:32.664 LINK err_injection 00:03:32.921 LINK reserve 00:03:32.921 LINK pmr_persistence 00:03:32.921 LINK simple_copy 00:03:32.921 CC examples/nvmf/nvmf/nvmf.o 00:03:32.921 CC examples/vmd/led/led.o 00:03:32.921 CXX test/cpp_headers/gpt_spec.o 00:03:32.921 CXX test/cpp_headers/hexlify.o 00:03:33.179 LINK connect_stress 00:03:33.179 CC examples/util/zipf/zipf.o 00:03:33.179 LINK led 00:03:33.179 CC test/nvme/boot_partition/boot_partition.o 00:03:33.179 CC test/nvme/compliance/nvme_compliance.o 00:03:33.437 CXX test/cpp_headers/histogram_data.o 00:03:33.437 CC test/nvme/fused_ordering/fused_ordering.o 00:03:33.437 CC examples/thread/thread/thread_ex.o 00:03:33.437 LINK nvmf 00:03:33.437 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:33.437 LINK zipf 00:03:33.437 LINK boot_partition 00:03:33.437 CC examples/idxd/perf/perf.o 00:03:33.694 CXX test/cpp_headers/idxd.o 00:03:33.694 LINK doorbell_aers 00:03:33.694 CXX test/cpp_headers/idxd_spec.o 00:03:33.694 LINK fused_ordering 00:03:33.694 LINK esnap 00:03:33.694 LINK thread 00:03:33.952 LINK nvme_compliance 00:03:33.952 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:33.952 CXX test/cpp_headers/init.o 00:03:33.952 CC test/nvme/fdp/fdp.o 00:03:33.952 CXX test/cpp_headers/ioat.o 00:03:34.209 CC test/nvme/cuse/cuse.o 00:03:34.209 LINK idxd_perf 00:03:34.209 CXX test/cpp_headers/ioat_spec.o 00:03:34.209 CXX test/cpp_headers/iscsi_spec.o 00:03:34.209 LINK interrupt_tgt 00:03:34.209 CXX test/cpp_headers/json.o 00:03:34.209 CXX test/cpp_headers/jsonrpc.o 00:03:34.467 CXX test/cpp_headers/likely.o 00:03:34.467 CXX test/cpp_headers/log.o 00:03:34.467 CXX test/cpp_headers/lvol.o 00:03:34.467 CXX test/cpp_headers/memory.o 00:03:34.467 LINK fdp 00:03:34.467 CXX test/cpp_headers/mmio.o 00:03:34.467 CXX test/cpp_headers/nbd.o 00:03:34.467 CXX test/cpp_headers/notify.o 00:03:34.724 CXX test/cpp_headers/nvme.o 00:03:34.724 CXX test/cpp_headers/nvme_intel.o 00:03:34.724 CXX test/cpp_headers/nvme_ocssd.o 00:03:34.724 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:34.724 CXX test/cpp_headers/nvme_spec.o 00:03:34.724 CXX test/cpp_headers/nvme_zns.o 00:03:34.724 CXX test/cpp_headers/nvmf_cmd.o 00:03:34.724 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:03:34.982 CXX test/cpp_headers/nvmf.o 00:03:34.982 CXX test/cpp_headers/nvmf_spec.o 00:03:34.982 CXX test/cpp_headers/nvmf_transport.o 00:03:34.982 CXX test/cpp_headers/opal.o 00:03:34.982 CXX test/cpp_headers/opal_spec.o 00:03:34.982 CXX test/cpp_headers/pci_ids.o 00:03:34.982 CXX test/cpp_headers/pipe.o 00:03:34.982 CXX test/cpp_headers/queue.o 00:03:35.240 CXX test/cpp_headers/reduce.o 00:03:35.240 CXX test/cpp_headers/rpc.o 00:03:35.240 CXX test/cpp_headers/scheduler.o 00:03:35.240 CXX test/cpp_headers/scsi.o 00:03:35.240 CXX test/cpp_headers/scsi_spec.o 00:03:35.240 CXX test/cpp_headers/sock.o 00:03:35.240 CXX test/cpp_headers/stdinc.o 00:03:35.240 LINK cuse 00:03:35.240 CXX test/cpp_headers/string.o 00:03:35.240 CXX test/cpp_headers/thread.o 00:03:35.240 CXX test/cpp_headers/trace.o 00:03:35.497 CXX test/cpp_headers/trace_parser.o 00:03:35.497 CXX test/cpp_headers/tree.o 00:03:35.497 CXX test/cpp_headers/ublk.o 00:03:35.497 CXX test/cpp_headers/util.o 00:03:35.497 CXX test/cpp_headers/uuid.o 00:03:35.497 CXX test/cpp_headers/version.o 00:03:35.497 CXX test/cpp_headers/vfio_user_pci.o 00:03:35.497 CXX test/cpp_headers/vfio_user_spec.o 00:03:35.497 CXX test/cpp_headers/vhost.o 00:03:35.497 CXX test/cpp_headers/vmd.o 00:03:35.754 CXX test/cpp_headers/xor.o 00:03:35.754 CXX test/cpp_headers/zipf.o 00:03:42.306 00:03:42.306 real 1m21.311s 00:03:42.306 user 9m1.121s 00:03:42.306 sys 1m47.044s 00:03:42.306 02:00:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:42.306 02:00:56 -- common/autotest_common.sh@10 -- $ set +x 00:03:42.306 ************************************ 00:03:42.306 END TEST make 00:03:42.306 ************************************ 00:03:42.306 02:00:56 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:42.306 02:00:56 -- nvmf/common.sh@7 -- # uname -s 00:03:42.306 02:00:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:42.306 02:00:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:42.306 02:00:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:42.306 02:00:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:42.306 02:00:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:42.306 02:00:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:42.306 02:00:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:42.306 02:00:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:42.306 02:00:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:42.306 02:00:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:42.306 02:00:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:03:42.306 02:00:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:03:42.306 02:00:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:42.306 02:00:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:42.306 02:00:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:42.306 02:00:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:42.306 02:00:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:42.306 02:00:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.306 02:00:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.306 02:00:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.306 02:00:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.306 02:00:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.306 02:00:56 -- paths/export.sh@5 -- # export PATH 00:03:42.306 02:00:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.306 02:00:56 -- nvmf/common.sh@46 -- # : 0 00:03:42.306 02:00:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:42.306 02:00:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:42.306 02:00:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:42.306 02:00:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:42.306 02:00:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:42.306 02:00:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:42.306 02:00:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:42.306 02:00:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:42.306 02:00:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:42.306 02:00:56 -- spdk/autotest.sh@32 -- # uname -s 00:03:42.306 02:00:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:42.306 02:00:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:42.306 02:00:56 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:42.306 02:00:56 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:42.306 02:00:56 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:42.306 02:00:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:42.306 02:00:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:42.306 02:00:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:42.306 02:00:56 -- spdk/autotest.sh@48 -- # udevadm_pid=49661 00:03:42.306 02:00:56 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:42.306 02:00:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:42.306 02:00:56 -- spdk/autotest.sh@54 -- # echo 49687 00:03:42.306 02:00:56 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:42.306 02:00:56 -- spdk/autotest.sh@56 -- # echo 49692 00:03:42.306 02:00:56 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:42.306 02:00:56 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:42.306 02:00:56 -- 
spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.306 02:00:56 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:42.306 02:00:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:42.306 02:00:56 -- common/autotest_common.sh@10 -- # set +x 00:03:42.306 02:00:56 -- spdk/autotest.sh@70 -- # create_test_list 00:03:42.306 02:00:56 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:42.306 02:00:56 -- common/autotest_common.sh@10 -- # set +x 00:03:42.306 02:00:56 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:42.306 02:00:56 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:42.306 02:00:56 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:42.306 02:00:56 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:42.306 02:00:56 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:42.306 02:00:56 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:42.306 02:00:56 -- common/autotest_common.sh@1440 -- # uname 00:03:42.306 02:00:56 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:42.306 02:00:56 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:42.306 02:00:56 -- common/autotest_common.sh@1460 -- # uname 00:03:42.306 02:00:56 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:42.306 02:00:56 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:42.306 02:00:56 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:42.306 02:00:56 -- spdk/autotest.sh@83 -- # hash lcov 00:03:42.306 02:00:56 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:42.306 02:00:56 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:42.306 --rc lcov_branch_coverage=1 00:03:42.306 --rc lcov_function_coverage=1 00:03:42.306 --rc genhtml_branch_coverage=1 00:03:42.306 --rc genhtml_function_coverage=1 00:03:42.306 --rc genhtml_legend=1 00:03:42.306 --rc geninfo_all_blocks=1 00:03:42.306 ' 00:03:42.306 02:00:56 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:42.306 --rc lcov_branch_coverage=1 00:03:42.306 --rc lcov_function_coverage=1 00:03:42.306 --rc genhtml_branch_coverage=1 00:03:42.306 --rc genhtml_function_coverage=1 00:03:42.306 --rc genhtml_legend=1 00:03:42.306 --rc geninfo_all_blocks=1 00:03:42.306 ' 00:03:42.306 02:00:56 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:42.306 --rc lcov_branch_coverage=1 00:03:42.306 --rc lcov_function_coverage=1 00:03:42.306 --rc genhtml_branch_coverage=1 00:03:42.306 --rc genhtml_function_coverage=1 00:03:42.306 --rc genhtml_legend=1 00:03:42.306 --rc geninfo_all_blocks=1 00:03:42.306 --no-external' 00:03:42.306 02:00:56 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:42.306 --rc lcov_branch_coverage=1 00:03:42.306 --rc lcov_function_coverage=1 00:03:42.306 --rc genhtml_branch_coverage=1 00:03:42.306 --rc genhtml_function_coverage=1 00:03:42.306 --rc genhtml_legend=1 00:03:42.306 --rc geninfo_all_blocks=1 00:03:42.306 --no-external' 00:03:42.306 02:00:56 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:42.306 lcov: LCOV version 1.14 00:03:42.306 02:00:56 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c 
-i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:52.271 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:52.271 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:52.271 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:52.271 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:52.271 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:52.271 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:14.206 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:14.206 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:14.207 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 
00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:14.207 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:14.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:14.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:15.140 02:01:29 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:15.140 02:01:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:15.140 02:01:29 -- common/autotest_common.sh@10 -- # set +x 00:04:15.140 02:01:29 -- spdk/autotest.sh@102 -- # rm -f 00:04:15.140 02:01:29 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.072 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:16.072 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:16.072 02:01:30 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:16.072 02:01:30 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:16.072 02:01:30 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:16.072 02:01:30 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:16.072 02:01:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.072 02:01:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:16.072 02:01:30 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:16.072 02:01:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.072 02:01:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.072 02:01:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.072 02:01:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:16.072 02:01:30 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:16.072 02:01:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.072 02:01:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.072 02:01:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.072 02:01:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:16.072 02:01:30 -- common/autotest_common.sh@1647 -- # local 
device=nvme1n2 00:04:16.072 02:01:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:16.072 02:01:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.072 02:01:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.072 02:01:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:16.072 02:01:30 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:16.072 02:01:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:16.072 02:01:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.072 02:01:30 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:16.072 02:01:30 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:16.072 02:01:30 -- spdk/autotest.sh@121 -- # grep -v p 00:04:16.072 02:01:30 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.072 02:01:30 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.072 02:01:30 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:16.072 02:01:30 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:16.072 02:01:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.072 No valid GPT data, bailing 00:04:16.072 02:01:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.072 02:01:30 -- scripts/common.sh@393 -- # pt= 00:04:16.072 02:01:30 -- scripts/common.sh@394 -- # return 1 00:04:16.072 02:01:30 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.072 1+0 records in 00:04:16.072 1+0 records out 00:04:16.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00334312 s, 314 MB/s 00:04:16.072 02:01:30 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.072 02:01:30 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.072 02:01:30 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:04:16.072 02:01:30 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:16.072 02:01:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:16.072 No valid GPT data, bailing 00:04:16.072 02:01:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:16.072 02:01:30 -- scripts/common.sh@393 -- # pt= 00:04:16.072 02:01:30 -- scripts/common.sh@394 -- # return 1 00:04:16.072 02:01:30 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:16.072 1+0 records in 00:04:16.072 1+0 records out 00:04:16.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481055 s, 218 MB/s 00:04:16.072 02:01:30 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.072 02:01:30 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.072 02:01:30 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:04:16.072 02:01:30 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:16.072 02:01:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:16.072 No valid GPT data, bailing 00:04:16.072 02:01:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:16.072 02:01:30 -- scripts/common.sh@393 -- # pt= 00:04:16.072 02:01:30 -- scripts/common.sh@394 -- # return 1 00:04:16.072 02:01:30 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:16.072 1+0 records in 00:04:16.072 1+0 records out 00:04:16.072 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.00290042 s, 362 MB/s 00:04:16.072 02:01:30 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.072 02:01:30 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.072 02:01:30 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:04:16.072 02:01:30 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:16.072 02:01:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:16.331 No valid GPT data, bailing 00:04:16.331 02:01:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:16.331 02:01:30 -- scripts/common.sh@393 -- # pt= 00:04:16.331 02:01:30 -- scripts/common.sh@394 -- # return 1 00:04:16.331 02:01:30 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:16.331 1+0 records in 00:04:16.331 1+0 records out 00:04:16.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00329051 s, 319 MB/s 00:04:16.331 02:01:30 -- spdk/autotest.sh@129 -- # sync 00:04:16.331 02:01:30 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.331 02:01:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.331 02:01:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:17.702 02:01:32 -- spdk/autotest.sh@135 -- # uname -s 00:04:17.702 02:01:32 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:17.702 02:01:32 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:17.702 02:01:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:17.702 02:01:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:17.702 02:01:32 -- common/autotest_common.sh@10 -- # set +x 00:04:17.702 ************************************ 00:04:17.702 START TEST setup.sh 00:04:17.702 ************************************ 00:04:17.702 02:01:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:17.702 * Looking for test storage... 00:04:17.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:17.702 02:01:32 -- setup/test-setup.sh@10 -- # uname -s 00:04:17.702 02:01:32 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:17.702 02:01:32 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:17.702 02:01:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:17.702 02:01:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:17.702 02:01:32 -- common/autotest_common.sh@10 -- # set +x 00:04:17.702 ************************************ 00:04:17.702 START TEST acl 00:04:17.702 ************************************ 00:04:17.702 02:01:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:17.960 * Looking for test storage... 
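For reference, the wipe loop traced just above (autotest.sh@121-125) follows one rule: a namespace is only zeroed when spdk-gpt.py and blkid find no partition-table signature on it ("No valid GPT data, bailing" plus an empty PTTYPE). A minimal sketch of that rule, reusing only the commands and paths visible in the log; the loop body here is illustrative, not the autotest.sh source:

    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        # empty PTTYPE means the namespace carries no partition table and is safe to clear,
        # so the first MiB is zeroed exactly as the dd lines above show
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync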
00:04:17.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:17.960 02:01:32 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:17.960 02:01:32 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:17.960 02:01:32 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:17.960 02:01:32 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:17.960 02:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:17.960 02:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:17.960 02:01:32 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:17.960 02:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.960 02:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:17.960 02:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:17.960 02:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:17.960 02:01:32 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:17.960 02:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:17.960 02:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:17.960 02:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:17.960 02:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:17.960 02:01:32 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:17.960 02:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:17.960 02:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:17.960 02:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:17.960 02:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:17.960 02:01:32 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:17.960 02:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:17.960 02:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:17.960 02:01:32 -- setup/acl.sh@12 -- # devs=() 00:04:17.960 02:01:32 -- setup/acl.sh@12 -- # declare -a devs 00:04:17.960 02:01:32 -- setup/acl.sh@13 -- # drivers=() 00:04:17.960 02:01:32 -- setup/acl.sh@13 -- # declare -A drivers 00:04:17.960 02:01:32 -- setup/acl.sh@51 -- # setup reset 00:04:17.960 02:01:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.960 02:01:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.554 02:01:33 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:18.554 02:01:33 -- setup/acl.sh@16 -- # local dev driver 00:04:18.554 02:01:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.554 02:01:33 -- setup/acl.sh@15 -- # setup output status 00:04:18.554 02:01:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.554 02:01:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:18.554 Hugepages 00:04:18.554 node hugesize free / total 00:04:18.555 02:01:33 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:18.555 02:01:33 -- setup/acl.sh@19 -- # continue 00:04:18.555 02:01:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.812 00:04:18.812 02:01:33 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:18.812 02:01:33 -- setup/acl.sh@19 -- # continue 00:04:18.812 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.812 02:01:33 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:04:18.812 02:01:33 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:18.812 02:01:33 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:18.812 02:01:33 -- setup/acl.sh@20 -- # continue 00:04:18.812 02:01:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.812 02:01:33 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:18.812 02:01:33 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.812 02:01:33 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:18.812 02:01:33 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.812 02:01:33 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.812 02:01:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.813 02:01:33 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:18.813 02:01:33 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.813 02:01:33 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:18.813 02:01:33 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.813 02:01:33 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.813 02:01:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.813 02:01:33 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:18.813 02:01:33 -- setup/acl.sh@54 -- # run_test denied denied 00:04:18.813 02:01:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.813 02:01:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.813 02:01:33 -- common/autotest_common.sh@10 -- # set +x 00:04:18.813 ************************************ 00:04:18.813 START TEST denied 00:04:18.813 ************************************ 00:04:18.813 02:01:33 -- common/autotest_common.sh@1104 -- # denied 00:04:18.813 02:01:33 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:18.813 02:01:33 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:18.813 02:01:33 -- setup/acl.sh@38 -- # setup output config 00:04:18.813 02:01:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.813 02:01:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.746 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:19.746 02:01:34 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:19.746 02:01:34 -- setup/acl.sh@28 -- # local dev driver 00:04:19.746 02:01:34 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:19.746 02:01:34 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:19.746 02:01:34 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:19.746 02:01:34 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:19.746 02:01:34 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:19.746 02:01:34 -- setup/acl.sh@41 -- # setup reset 00:04:19.746 02:01:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.746 02:01:34 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.310 00:04:20.310 real 0m1.333s 00:04:20.310 user 0m0.576s 00:04:20.310 sys 0m0.698s 00:04:20.310 02:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.310 02:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.310 ************************************ 00:04:20.310 END TEST denied 00:04:20.310 ************************************ 00:04:20.310 02:01:34 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:20.310 02:01:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.310 02:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.310 
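The denied test that just finished boils down to a single assertion: with PCI_BLOCKED set, setup.sh config must log that it skipped the blocked controller. A minimal sketch of that check, reusing the exact variable value and grep pattern from the trace; the check_denied wrapper name is made up for illustration:

    check_denied() {
        # setup/acl.sh@38-40: block 0000:00:06.0, run "setup output config", expect the skip message
        PCI_BLOCKED=' 0000:00:06.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config \
            | grep 'Skipping denied controller at 0000:00:06.0'
    }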
02:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.311 ************************************ 00:04:20.311 START TEST allowed 00:04:20.311 ************************************ 00:04:20.311 02:01:34 -- common/autotest_common.sh@1104 -- # allowed 00:04:20.311 02:01:34 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:20.311 02:01:34 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:20.311 02:01:34 -- setup/acl.sh@45 -- # setup output config 00:04:20.311 02:01:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.311 02:01:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.245 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.245 02:01:35 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:21.245 02:01:35 -- setup/acl.sh@28 -- # local dev driver 00:04:21.245 02:01:35 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:21.245 02:01:35 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:21.245 02:01:35 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:21.245 02:01:35 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:21.245 02:01:35 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:21.245 02:01:35 -- setup/acl.sh@48 -- # setup reset 00:04:21.245 02:01:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.245 02:01:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.812 00:04:21.812 real 0m1.444s 00:04:21.812 user 0m0.667s 00:04:21.812 sys 0m0.772s 00:04:21.812 02:01:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.812 ************************************ 00:04:21.812 END TEST allowed 00:04:21.812 ************************************ 00:04:21.812 02:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.812 ************************************ 00:04:21.812 END TEST acl 00:04:21.812 ************************************ 00:04:21.812 00:04:21.812 real 0m3.926s 00:04:21.812 user 0m1.773s 00:04:21.812 sys 0m2.116s 00:04:21.812 02:01:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.812 02:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.813 02:01:36 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:21.813 02:01:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.813 02:01:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.813 02:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.813 ************************************ 00:04:21.813 START TEST hugepages 00:04:21.813 ************************************ 00:04:21.813 02:01:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:21.813 * Looking for test storage... 
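The hugepages test that starts here leans on the get_meminfo helper whose trace fills the next lines: pick /proc/meminfo (or a per-node meminfo file when a node is given) and return one numeric field. A condensed, awk-based equivalent of what the traced read loop does, assuming the same field names; the real helper walks the fields in a bash loop rather than calling awk:

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # setup/common.sh@23 switches to the per-node file when one exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        awk -F': *' -v key="$get" '$1 == key { print $2 + 0 }' "$mem_f"
    }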
00:04:21.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:21.813 02:01:36 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:21.813 02:01:36 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:21.813 02:01:36 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:21.813 02:01:36 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:21.813 02:01:36 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:21.813 02:01:36 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:21.813 02:01:36 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:21.813 02:01:36 -- setup/common.sh@18 -- # local node= 00:04:21.813 02:01:36 -- setup/common.sh@19 -- # local var val 00:04:21.813 02:01:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.813 02:01:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.813 02:01:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.813 02:01:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.813 02:01:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.813 02:01:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5469812 kB' 'MemAvailable: 7402476 kB' 'Buffers: 2436 kB' 'Cached: 2142316 kB' 'SwapCached: 0 kB' 'Active: 872416 kB' 'Inactive: 1375044 kB' 'Active(anon): 113196 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 104444 kB' 'Mapped: 48768 kB' 'Shmem: 10488 kB' 'KReclaimable: 70676 kB' 'Slab: 145188 kB' 'SReclaimable: 70676 kB' 'SUnreclaim: 74512 kB' 'KernelStack: 6440 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 325664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- 
setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.813 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.813 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # continue 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.814 02:01:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.814 02:01:36 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.814 02:01:36 -- setup/common.sh@33 -- # echo 2048 00:04:21.814 02:01:36 -- setup/common.sh@33 -- # return 0 00:04:21.814 02:01:36 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:21.814 02:01:36 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:21.814 02:01:36 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:21.814 02:01:36 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:21.814 02:01:36 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:21.814 02:01:36 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
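The xtrace above is the meminfo scan in setup/common.sh: it splits each /proc/meminfo line on ': ', skips every key until the requested one (here Hugepagesize) matches, echoes the value (2048), and setup/hugepages.sh then derives its defaults from it. A minimal standalone sketch of that lookup pattern, reconstructed from the trace; the function name and the usage line are illustrative, not the exact SPDK helper:

  #!/usr/bin/env bash
  # Return the value column for one /proc/meminfo key, e.g. "Hugepagesize" -> "2048".
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Skip every key until the requested one matches, as the trace above shows.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done </proc/meminfo
      return 1
  }

  default_hugepages=$(meminfo_value Hugepagesize)   # 2048 (kB) on this runner
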
00:04:21.814 02:01:36 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:21.814 02:01:36 -- setup/hugepages.sh@207 -- # get_nodes 00:04:21.814 02:01:36 -- setup/hugepages.sh@27 -- # local node 00:04:21.814 02:01:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.814 02:01:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:21.814 02:01:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:21.814 02:01:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.814 02:01:36 -- setup/hugepages.sh@208 -- # clear_hp 00:04:21.814 02:01:36 -- setup/hugepages.sh@37 -- # local node hp 00:04:21.814 02:01:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.814 02:01:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.814 02:01:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.814 02:01:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.814 02:01:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.814 02:01:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:21.814 02:01:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:21.814 02:01:36 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:21.814 02:01:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.814 02:01:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.814 02:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.814 ************************************ 00:04:21.814 START TEST default_setup 00:04:21.814 ************************************ 00:04:21.814 02:01:36 -- common/autotest_common.sh@1104 -- # default_setup 00:04:21.814 02:01:36 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:21.814 02:01:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.814 02:01:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.814 02:01:36 -- setup/hugepages.sh@51 -- # shift 00:04:21.814 02:01:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:21.814 02:01:36 -- setup/hugepages.sh@52 -- # local node_ids 00:04:21.814 02:01:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.814 02:01:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.814 02:01:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.814 02:01:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:21.814 02:01:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.814 02:01:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.814 02:01:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:21.814 02:01:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.814 02:01:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.815 02:01:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.815 02:01:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.815 02:01:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:21.815 02:01:36 -- setup/hugepages.sh@73 -- # return 0 00:04:21.815 02:01:36 -- setup/hugepages.sh@137 -- # setup output 00:04:21.815 02:01:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.815 02:01:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.641 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.641 
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.641 02:01:37 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:22.641 02:01:37 -- setup/hugepages.sh@89 -- # local node 00:04:22.641 02:01:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.641 02:01:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.641 02:01:37 -- setup/hugepages.sh@92 -- # local surp 00:04:22.641 02:01:37 -- setup/hugepages.sh@93 -- # local resv 00:04:22.641 02:01:37 -- setup/hugepages.sh@94 -- # local anon 00:04:22.641 02:01:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.641 02:01:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.641 02:01:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.641 02:01:37 -- setup/common.sh@18 -- # local node= 00:04:22.641 02:01:37 -- setup/common.sh@19 -- # local var val 00:04:22.641 02:01:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.641 02:01:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.641 02:01:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.641 02:01:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.641 02:01:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.641 02:01:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7561424 kB' 'MemAvailable: 9493924 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888956 kB' 'Inactive: 1375064 kB' 'Active(anon): 129736 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120928 kB' 'Mapped: 48968 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144800 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74488 kB' 'KernelStack: 6400 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.641 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.641 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 
02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 
-- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.642 02:01:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.642 02:01:37 -- setup/common.sh@33 -- # echo 0 00:04:22.642 02:01:37 -- setup/common.sh@33 -- # return 0 00:04:22.642 02:01:37 -- setup/hugepages.sh@97 -- # anon=0 00:04:22.642 02:01:37 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.642 02:01:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.642 02:01:37 -- setup/common.sh@18 -- # local node= 00:04:22.642 02:01:37 -- setup/common.sh@19 -- # local var val 00:04:22.642 02:01:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.642 02:01:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.642 02:01:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.642 02:01:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.642 02:01:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.642 02:01:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.642 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7560924 kB' 'MemAvailable: 9493424 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 889164 kB' 'Inactive: 1375064 kB' 'Active(anon): 129944 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120828 kB' 'Mapped: 49168 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144796 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74484 kB' 'KernelStack: 6384 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- 
setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.643 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.643 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 
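As a quick consistency check on the meminfo snapshot printed above (values from this run, shown only to make the hugepage accounting concrete): the preallocated hugetlb pool is HugePages_Total pages of Hugepagesize each.

  # 1024 pages * 2048 kB/page = 2097152 kB, matching the 'Hugetlb: 2097152 kB' field above.
  echo $((1024 * 2048))
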
00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.644 02:01:37 -- setup/common.sh@33 -- # echo 0 00:04:22.644 02:01:37 -- setup/common.sh@33 -- # return 0 00:04:22.644 02:01:37 -- setup/hugepages.sh@99 -- # surp=0 00:04:22.644 02:01:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.644 02:01:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.644 02:01:37 -- setup/common.sh@18 -- # local node= 00:04:22.644 02:01:37 -- setup/common.sh@19 -- # local var val 00:04:22.644 02:01:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.644 02:01:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.644 02:01:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.644 02:01:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.644 02:01:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.644 02:01:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7560424 kB' 'MemAvailable: 9492924 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 
'SwapCached: 0 kB' 'Active: 888548 kB' 'Inactive: 1375064 kB' 'Active(anon): 129328 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120452 kB' 'Mapped: 48908 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144792 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74480 kB' 'KernelStack: 6368 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.644 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.644 02:01:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.645 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.645 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 
00:04:22.905 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.905 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.905 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.906 02:01:37 -- setup/common.sh@33 -- # echo 0 00:04:22.906 02:01:37 -- setup/common.sh@33 -- # return 0 00:04:22.906 02:01:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:22.906 nr_hugepages=1024 00:04:22.906 02:01:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:22.906 resv_hugepages=0 00:04:22.906 02:01:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.906 surplus_hugepages=0 00:04:22.906 02:01:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.906 anon_hugepages=0 00:04:22.906 02:01:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.906 02:01:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.906 02:01:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.906 02:01:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.906 02:01:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.906 02:01:37 -- setup/common.sh@18 -- # local node= 00:04:22.906 02:01:37 -- setup/common.sh@19 -- # local var val 00:04:22.906 02:01:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.906 02:01:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.906 02:01:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.906 02:01:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.906 02:01:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.906 02:01:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7560676 kB' 'MemAvailable: 9493176 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888296 kB' 'Inactive: 1375064 kB' 'Active(anon): 129076 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120216 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144784 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74472 kB' 'KernelStack: 6368 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.906 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.906 02:01:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 
00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.907 02:01:37 -- setup/common.sh@33 -- # echo 1024 
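The lookup that just completed above is setup/common.sh's get_meminfo walking a /proc/meminfo dump key by key with IFS=': ' until the requested field (HugePages_Total here) matches, then echoing its value (1024). A minimal standalone sketch of that pattern follows; the function name get_meminfo_field is illustrative, not the repo's actual helper:

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field, e.g. "get_meminfo_field HugePages_Total".
    get_meminfo_field() {
        local want=$1
        local key val rest
        while IFS=': ' read -r key val rest; do
            if [[ $key == "$want" ]]; then
                echo "$val"        # numeric value only; any trailing "kB" ends up in $rest
                return 0
            fi
        done < /proc/meminfo
        return 1                   # field not present
    }

    get_meminfo_field HugePages_Total    # prints e.g. 1024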
00:04:22.907 02:01:37 -- setup/common.sh@33 -- # return 0 00:04:22.907 02:01:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.907 02:01:37 -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.907 02:01:37 -- setup/hugepages.sh@27 -- # local node 00:04:22.907 02:01:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.907 02:01:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.907 02:01:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:22.907 02:01:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.907 02:01:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.907 02:01:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.907 02:01:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.907 02:01:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.907 02:01:37 -- setup/common.sh@18 -- # local node=0 00:04:22.907 02:01:37 -- setup/common.sh@19 -- # local var val 00:04:22.907 02:01:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.907 02:01:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.907 02:01:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.907 02:01:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.907 02:01:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.907 02:01:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7560676 kB' 'MemUsed: 4681300 kB' 'SwapCached: 0 kB' 'Active: 888296 kB' 'Inactive: 1375064 kB' 'Active(anon): 129076 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2144748 kB' 'Mapped: 48784 kB' 'AnonPages: 120204 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70312 kB' 'Slab: 144776 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.907 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.907 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 
02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 
02:01:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # continue 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.908 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.908 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.908 02:01:37 -- setup/common.sh@33 -- # echo 0 00:04:22.908 02:01:37 -- setup/common.sh@33 -- # return 0 00:04:22.908 02:01:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.909 02:01:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.909 02:01:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.909 02:01:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.909 node0=1024 expecting 1024 00:04:22.909 02:01:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.909 02:01:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.909 00:04:22.909 real 0m0.920s 00:04:22.909 user 0m0.431s 00:04:22.909 sys 0m0.441s 00:04:22.909 02:01:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.909 02:01:37 -- common/autotest_common.sh@10 -- # set +x 00:04:22.909 ************************************ 00:04:22.909 END TEST default_setup 00:04:22.909 ************************************ 00:04:22.909 02:01:37 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:22.909 02:01:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:22.909 02:01:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:22.909 02:01:37 -- common/autotest_common.sh@10 -- # set +x 00:04:22.909 ************************************ 00:04:22.909 START TEST per_node_1G_alloc 00:04:22.909 ************************************ 
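default_setup finishes here after confirming that the whole 1024-page pool sits on node0 (node0=1024 expecting 1024). The per_node_1G_alloc test that starts next asks for 1 GiB of hugepages on node 0 only; with the 2048 kB Hugepagesize shown in the meminfo dumps above, get_test_nr_hugepages turns the 1048576 kB request into 512 pages. A rough sketch of that size-to-pages conversion, with illustrative helper names:

    # Convert a request given in kB into a hugepage count using the default hugepage size.
    default_hugepagesize_kb() {
        awk '/^Hugepagesize:/ {print $2}' /proc/meminfo
    }

    pages_for_kb() {
        local size_kb=$1
        echo $(( size_kb / $(default_hugepagesize_kb) ))
    }

    pages_for_kb 1048576    # 1 GiB expressed in kB -> 512 on a 2048 kB hugepage system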
00:04:22.909 02:01:37 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:22.909 02:01:37 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:22.909 02:01:37 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:22.909 02:01:37 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:22.909 02:01:37 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:22.909 02:01:37 -- setup/hugepages.sh@51 -- # shift 00:04:22.909 02:01:37 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:22.909 02:01:37 -- setup/hugepages.sh@52 -- # local node_ids 00:04:22.909 02:01:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.909 02:01:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:22.909 02:01:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:22.909 02:01:37 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:22.909 02:01:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.909 02:01:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:22.909 02:01:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:22.909 02:01:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.909 02:01:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.909 02:01:37 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:22.909 02:01:37 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.909 02:01:37 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:22.909 02:01:37 -- setup/hugepages.sh@73 -- # return 0 00:04:22.909 02:01:37 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:22.909 02:01:37 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:22.909 02:01:37 -- setup/hugepages.sh@146 -- # setup output 00:04:22.909 02:01:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.909 02:01:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.170 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.170 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.170 02:01:37 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:23.170 02:01:37 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:23.170 02:01:37 -- setup/hugepages.sh@89 -- # local node 00:04:23.170 02:01:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.170 02:01:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.170 02:01:37 -- setup/hugepages.sh@92 -- # local surp 00:04:23.170 02:01:37 -- setup/hugepages.sh@93 -- # local resv 00:04:23.170 02:01:37 -- setup/hugepages.sh@94 -- # local anon 00:04:23.170 02:01:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.170 02:01:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.170 02:01:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.170 02:01:37 -- setup/common.sh@18 -- # local node= 00:04:23.170 02:01:37 -- setup/common.sh@19 -- # local var val 00:04:23.170 02:01:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.170 02:01:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.170 02:01:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.170 02:01:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.170 02:01:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.170 02:01:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 
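Earlier in this block the test re-ran /home/vagrant/spdk_repo/spdk/scripts/setup.sh with NRHUGE=512 and HUGENODE=0, and the 0000:00:03.0/06.0/07.0 lines report which devices stayed on their drivers afterwards. The sketch below is not taken from setup.sh itself; it only shows the standard kernel sysfs interface that a node-pinned reservation like this ultimately relies on:

    # Reserve 512 x 2048 kB hugepages on NUMA node 0 (run as root).
    node=0
    nr=512
    echo "$nr" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"

    # Confirm the pool landed on that node.
    cat "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    grep HugePages_Total "/sys/devices/system/node/node${node}/meminfo"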
00:04:23.170 02:01:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8612120 kB' 'MemAvailable: 10544620 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888712 kB' 'Inactive: 1375064 kB' 'Active(anon): 129492 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120596 kB' 'Mapped: 48972 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144796 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74484 kB' 'KernelStack: 6324 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.170 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.170 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 
02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
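The scan in progress here is get_meminfo fetching AnonHugePages. It only runs because the earlier check at hugepages.sh@96, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], found that the bracketed (active) transparent-hugepage mode is not [never]; the string being tested is the format of /sys/kernel/mm/transparent_hugepage/enabled, and with THP available the anonymous huge-page count is recorded alongside the hugetlb counters. A small equivalent check, with illustrative variable names:

    thp_mode_file=/sys/kernel/mm/transparent_hugepage/enabled
    anon=0
    if [[ -e $thp_mode_file && $(<"$thp_mode_file") != *"[never]"* ]]; then
        # THP is at least available (madvise/always), so note any anonymous huge pages
        # before comparing the hugetlb counters.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "AnonHugePages: ${anon} kB"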
00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.171 02:01:37 -- setup/common.sh@33 -- # echo 0 00:04:23.171 02:01:37 -- setup/common.sh@33 -- # return 0 00:04:23.171 02:01:37 -- setup/hugepages.sh@97 -- # anon=0 00:04:23.171 02:01:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.171 02:01:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.171 02:01:37 -- setup/common.sh@18 -- # local node= 00:04:23.171 02:01:37 -- setup/common.sh@19 -- # local var val 00:04:23.171 02:01:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.171 02:01:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.171 02:01:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.171 02:01:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.171 02:01:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.171 02:01:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8612424 kB' 'MemAvailable: 10544924 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888556 kB' 'Inactive: 1375064 kB' 'Active(anon): 129336 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120484 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 
kB' 'Slab: 144812 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74500 kB' 'KernelStack: 6368 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.171 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.171 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 
02:01:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
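This HugePages_Surp scan (and the HugePages_Rsvd lookup that follows just after it) feeds a single accounting identity, the same one visible earlier at hugepages.sh@110 in the default_setup run: the pool the kernel reports has to equal the pages the test requested plus any surplus and reserved pages. Restated as a standalone snippet with illustrative variable names:

    nr_hugepages=512    # what the test requested via NRHUGE
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting OK: ${total} == ${nr_hugepages} + ${surp} + ${resv}"
    else
        echo "hugepage accounting mismatch: ${total} vs ${nr_hugepages} + ${surp} + ${resv}" >&2
    fi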
00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.172 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.172 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.173 02:01:37 -- setup/common.sh@33 -- # echo 0 00:04:23.173 02:01:37 -- setup/common.sh@33 -- # return 0 00:04:23.173 02:01:37 -- setup/hugepages.sh@99 -- # surp=0 00:04:23.173 02:01:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.173 02:01:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.173 02:01:37 -- setup/common.sh@18 -- # local node= 00:04:23.173 02:01:37 -- setup/common.sh@19 -- # local var val 00:04:23.173 02:01:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.173 02:01:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.173 02:01:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.173 02:01:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.173 02:01:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.173 02:01:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8612424 kB' 'MemAvailable: 10544924 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888288 kB' 'Inactive: 1375064 kB' 'Active(anon): 129068 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120188 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144812 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74500 kB' 'KernelStack: 6384 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 341324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- 
# continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.173 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.173 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.174 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.174 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.174 02:01:37 -- setup/common.sh@33 -- # echo 0 00:04:23.174 02:01:37 -- setup/common.sh@33 -- # return 0 00:04:23.174 02:01:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:23.174 nr_hugepages=512 00:04:23.174 02:01:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:23.174 
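The trace above is setup/common.sh's get_meminfo stepping field by field through /proc/meminfo: each line is split on ': ' into var/val, non-matching fields hit continue, and the requested field's value is echoed back (0 for HugePages_Rsvd here, which hugepages.sh stores as resv). A minimal sketch of that lookup, assuming the same helper name and arguments as in the trace; the xtrace plumbing and error handling of the real script are omitted:

shopt -s extglob

# Sketch of the lookup loop traced above (setup/common.sh get_meminfo):
# load /proc/meminfo, or a per-node meminfo file when a node number is
# passed, split each line on ': ' into field/value, and echo the value
# of the requested field. Approximation of the real helper, not a copy.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem entry var val _

    # per-node statistics live under /sys/devices/system/node/nodeN/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "

    for entry in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$entry"
        if [[ $var == "$get" ]]; then
            echo "$val"                # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total
            return 0
        fi
    done
    return 1
}

Called as, e.g., resv=$(get_meminfo HugePages_Rsvd) or get_meminfo HugePages_Surp 0 for node 0, which is where the echo 0 / return 0 pairs in the trace come from.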
resv_hugepages=0 00:04:23.174 02:01:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.174 surplus_hugepages=0 00:04:23.175 02:01:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.175 anon_hugepages=0 00:04:23.175 02:01:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.175 02:01:37 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:23.175 02:01:37 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:23.175 02:01:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.175 02:01:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.175 02:01:37 -- setup/common.sh@18 -- # local node= 00:04:23.175 02:01:37 -- setup/common.sh@19 -- # local var val 00:04:23.175 02:01:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.175 02:01:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.175 02:01:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.175 02:01:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.175 02:01:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.175 02:01:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.175 02:01:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8612676 kB' 'MemAvailable: 10545176 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888440 kB' 'Inactive: 1375064 kB' 'Active(anon): 129220 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120364 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144784 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74472 kB' 'KernelStack: 6336 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.175 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.175 02:01:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 
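Around this point hugepages.sh combines those lookups into the actual assertion: HugePages_Total must equal nr_hugepages + surplus + reserved (512 == 512 + 0 + 0 in this run), and the per-node files under /sys/devices/system/node are then checked the same way, which is what produces the "node0=512 expecting 512" line further down. A rough sketch of that check, reusing the get_meminfo sketch above; verify_pool is an illustrative name, not a function from the repo, and the real verify_nr_hugepages keeps extra bookkeeping (nodes_test, sorted_t/sorted_s arrays) that is left out here:

# Rough sketch of the hugepage accounting the trace is building up to.
verify_pool() {
    local nr_hugepages=$1              # 512 for this test
    local surp resv total node

    surp=$(get_meminfo HugePages_Surp)    # 0 in the log
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in the log
    total=$(get_meminfo HugePages_Total)  # 512 in the log

    # global accounting: allocated pool == requested pages + surplus + reserved
    (( total == nr_hugepages + surp + resv )) || return 1

    # per-node accounting, mirroring the "node0=512 expecting 512" output below
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        echo "node$node=$(get_meminfo HugePages_Total "$node") expecting $nr_hugepages"
    done
}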
00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 
02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.435 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.436 02:01:37 -- setup/common.sh@33 -- # echo 512 00:04:23.436 02:01:37 -- setup/common.sh@33 -- # return 0 00:04:23.436 02:01:37 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:23.436 02:01:37 -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.436 02:01:37 -- setup/hugepages.sh@27 -- # local node 00:04:23.436 02:01:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.436 02:01:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:23.436 02:01:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.436 02:01:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.436 02:01:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.436 02:01:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.436 02:01:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.436 02:01:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.436 02:01:37 -- setup/common.sh@18 -- # local node=0 00:04:23.436 02:01:37 -- setup/common.sh@19 -- # local var val 00:04:23.436 02:01:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.436 02:01:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.436 02:01:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.436 02:01:37 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.436 02:01:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.436 02:01:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8612676 kB' 'MemUsed: 3629300 kB' 'SwapCached: 0 kB' 'Active: 888512 kB' 'Inactive: 1375064 kB' 'Active(anon): 129292 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2144748 kB' 'Mapped: 48784 kB' 'AnonPages: 120420 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70312 kB' 'Slab: 144788 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- 
setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # continue 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.437 02:01:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.437 02:01:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.437 02:01:37 -- setup/common.sh@33 -- # echo 0 00:04:23.437 02:01:37 -- setup/common.sh@33 -- # return 0 00:04:23.437 02:01:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.437 02:01:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.437 02:01:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.437 02:01:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.437 node0=512 expecting 512 00:04:23.437 02:01:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:23.437 02:01:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:23.437 00:04:23.437 real 0m0.456s 00:04:23.437 user 0m0.245s 00:04:23.437 sys 0m0.240s 00:04:23.437 02:01:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.437 02:01:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.437 ************************************ 00:04:23.437 END TEST per_node_1G_alloc 00:04:23.437 ************************************ 00:04:23.437 02:01:37 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:23.437 02:01:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:23.437 02:01:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:23.437 02:01:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.437 ************************************ 00:04:23.437 START TEST even_2G_alloc 00:04:23.437 ************************************ 00:04:23.437 02:01:37 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:23.437 02:01:37 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:23.437 02:01:37 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:23.437 02:01:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:23.437 02:01:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.437 02:01:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:23.437 02:01:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:23.437 02:01:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:23.437 02:01:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.437 02:01:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.437 02:01:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.437 02:01:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.437 02:01:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.437 02:01:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.437 02:01:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:23.437 02:01:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.437 02:01:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:23.437 02:01:37 -- setup/hugepages.sh@83 -- # : 0 00:04:23.437 02:01:37 -- 
setup/hugepages.sh@84 -- # : 0 00:04:23.437 02:01:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.437 02:01:37 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:23.437 02:01:37 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:23.437 02:01:37 -- setup/hugepages.sh@153 -- # setup output 00:04:23.437 02:01:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.437 02:01:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.701 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.701 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.701 02:01:38 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:23.701 02:01:38 -- setup/hugepages.sh@89 -- # local node 00:04:23.701 02:01:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.701 02:01:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.701 02:01:38 -- setup/hugepages.sh@92 -- # local surp 00:04:23.701 02:01:38 -- setup/hugepages.sh@93 -- # local resv 00:04:23.701 02:01:38 -- setup/hugepages.sh@94 -- # local anon 00:04:23.701 02:01:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.701 02:01:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.701 02:01:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.701 02:01:38 -- setup/common.sh@18 -- # local node= 00:04:23.701 02:01:38 -- setup/common.sh@19 -- # local var val 00:04:23.701 02:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.701 02:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.701 02:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.701 02:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.701 02:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.701 02:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7563576 kB' 'MemAvailable: 9496076 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888716 kB' 'Inactive: 1375064 kB' 'Active(anon): 129496 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120656 kB' 'Mapped: 48960 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144812 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74500 kB' 'KernelStack: 6376 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.701 02:01:38 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.701 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 
02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # 
continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.702 02:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.702 02:01:38 -- setup/common.sh@33 -- # echo 0 00:04:23.702 02:01:38 -- setup/common.sh@33 -- # return 0 00:04:23.702 02:01:38 -- setup/hugepages.sh@97 -- # anon=0 00:04:23.702 02:01:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.702 02:01:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.702 02:01:38 -- setup/common.sh@18 -- # local node= 00:04:23.702 02:01:38 -- setup/common.sh@19 -- # local var val 00:04:23.702 02:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.702 02:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.702 02:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.702 02:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.702 02:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.702 02:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.702 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7563576 kB' 'MemAvailable: 9496076 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888344 kB' 'Inactive: 1375064 kB' 'Active(anon): 129124 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120492 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144804 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74492 kB' 'KernelStack: 6352 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # 
continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.703 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.703 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- 
# continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.704 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.704 02:01:38 -- setup/common.sh@33 -- # echo 0 00:04:23.704 02:01:38 -- setup/common.sh@33 -- # return 0 00:04:23.704 02:01:38 -- setup/hugepages.sh@99 -- # surp=0 00:04:23.704 02:01:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.704 02:01:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.704 02:01:38 -- setup/common.sh@18 -- # local node= 00:04:23.704 02:01:38 -- setup/common.sh@19 -- # local var val 00:04:23.704 02:01:38 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:23.704 02:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.704 02:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.704 02:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.704 02:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.704 02:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.704 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7563576 kB' 'MemAvailable: 9496076 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888764 kB' 'Inactive: 1375064 kB' 'Active(anon): 129544 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120676 kB' 'Mapped: 49044 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144800 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74488 kB' 'KernelStack: 6368 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 
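The trace around this point is setup/common.sh's get_meminfo stepping through /proc/meminfo one field at a time: each line is split on ': ', every key that is not the requested one falls through to continue, and the matching key's value is echoed back (0 for AnonHugePages and HugePages_Surp in this run). A minimal standalone sketch of that pattern, with illustrative names only and reading the file directly rather than through the mapfile'd array the real helper uses:

    #!/usr/bin/env bash
    # Sketch of the key-scan pattern visible in the trace: split each meminfo
    # line on ': ', skip keys that do not match, print the value of the match.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # e.g. AnonHugePages, HugePages_Surp
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch AnonHugePages   # prints 0 on the machine traced here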
00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.705 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.705 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- 
setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.706 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.706 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.706 02:01:38 -- setup/common.sh@33 -- # echo 0 00:04:23.706 02:01:38 -- setup/common.sh@33 -- # return 0 00:04:23.706 02:01:38 -- setup/hugepages.sh@100 -- # resv=0 00:04:23.706 nr_hugepages=1024 00:04:23.706 02:01:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.706 resv_hugepages=0 00:04:23.706 02:01:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.706 surplus_hugepages=0 00:04:23.706 02:01:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.706 anon_hugepages=0 00:04:23.706 02:01:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.707 02:01:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.707 02:01:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.707 02:01:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.707 02:01:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.707 02:01:38 -- setup/common.sh@18 -- # local node= 00:04:23.707 02:01:38 -- setup/common.sh@19 -- # local var val 00:04:23.707 02:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.707 02:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.707 02:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.707 02:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.707 02:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.707 02:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7564148 kB' 'MemAvailable: 9496648 kB' 
'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888268 kB' 'Inactive: 1375064 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120200 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144788 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74476 kB' 'KernelStack: 6336 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- 
setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.707 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.707 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 
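The backslash-escaped right-hand sides in these comparisons (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and friends) are not written that way in the script; they are how bash's xtrace re-prints a quoted string on the pattern side of == so the logged command would still match literally rather than as a glob. A tiny illustration of the effect, assuming the script compares against a plain quoted variable:

    # With xtrace enabled, a quoted RHS of == is re-printed character-escaped,
    # which is what produces the \H\u\g\e... strings throughout this log.
    set -x
    get=HugePages_Total
    var=MemTotal
    [[ $var == "$get" ]] || echo no-match   # traced roughly as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]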
00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.708 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.708 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 
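By this point hugepages.sh has already recorded anon=0, surp=0 and resv=0 from the lookups traced above, and the scan running here fetches HugePages_Total so it can confirm that the 1024 configured pages equal nr_hugepages + surp + resv. A compressed, self-contained sketch of that bookkeeping (awk stands in for the traced helper; values in comments are from this run):

    # Sketch of the hugepage accounting check the trace walks through.
    meminfo() { awk -v k="$1" -F'[: ]+' '$1 == k {print $2}' /proc/meminfo; }

    nr_hugepages=1024
    anon=$(meminfo AnonHugePages)      # 0 in this run
    surp=$(meminfo HugePages_Surp)     # 0
    resv=$(meminfo HugePages_Rsvd)     # 0
    total=$(meminfo HugePages_Total)   # 1024
    # Every configured page must be accounted for, or the test fails:
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'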
00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.709 02:01:38 -- setup/common.sh@33 -- # echo 1024 00:04:23.709 02:01:38 -- setup/common.sh@33 -- # return 0 00:04:23.709 02:01:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.709 02:01:38 -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.709 02:01:38 -- setup/hugepages.sh@27 -- # local node 00:04:23.709 02:01:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.709 02:01:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.709 02:01:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.709 02:01:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.709 02:01:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.709 02:01:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.709 02:01:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.709 02:01:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.709 02:01:38 -- setup/common.sh@18 -- # local node=0 00:04:23.709 02:01:38 -- setup/common.sh@19 -- # local var val 00:04:23.709 02:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.709 02:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.709 02:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.709 02:01:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.709 02:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.709 02:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7564148 kB' 'MemUsed: 4677828 kB' 'SwapCached: 0 kB' 'Active: 888260 kB' 'Inactive: 1375064 kB' 'Active(anon): 129040 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2144748 kB' 'Mapped: 48784 kB' 'AnonPages: 120452 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70312 kB' 'Slab: 144788 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 
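The remainder of the check repeats the HugePages_Surp lookup per NUMA node: with node=0 the helper swaps mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0" prefix from every line before parsing, which is the mem=("${mem[@]#Node +([0-9]) }") step in the trace. A hedged sketch of that node-aware variant (names are illustrative, not the real setup/common.sh):

    # Sketch of the node-aware lookup seen in the trace.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_node_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_node_meminfo HugePages_Surp 0   # prints 0 for node0 in this run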
00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.709 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.709 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # continue 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.710 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.710 02:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.004 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.004 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.004 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.004 02:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.004 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- 
setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.005 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.005 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.005 02:01:38 -- setup/common.sh@33 -- # echo 0 00:04:24.005 02:01:38 -- setup/common.sh@33 -- # return 0 00:04:24.005 02:01:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.005 02:01:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.005 02:01:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.005 02:01:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.005 node0=1024 expecting 1024 00:04:24.005 02:01:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.005 02:01:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.005 00:04:24.005 real 0m0.459s 00:04:24.005 user 0m0.236s 00:04:24.005 sys 0m0.252s 
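The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above is setup/common.sh's get_meminfo stepping through /proc/meminfo one field at a time until it reaches the requested key (HugePages_Surp here), echoing its value (0) and returning. A condensed sketch of that loop, paraphrased from the traced commands rather than copied from the SPDK source (the name get_meminfo_sketch is chosen here purely for illustration):

shopt -s extglob

# Print the value of one meminfo key, the way the xtrace above walks the file.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-specific file instead; its lines carry a "Node N " prefix.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix when present
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
        # every non-matching key is one of the "continue" entries repeated throughout this trace
    done
    return 1
}

For the meminfo snapshots printed further down, get_meminfo_sketch HugePages_Surp prints 0, which is the "echo 0 / return 0" pair that closes each of these scans.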
00:04:24.005 02:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.005 02:01:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.005 ************************************ 00:04:24.005 END TEST even_2G_alloc 00:04:24.005 ************************************ 00:04:24.005 02:01:38 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:24.005 02:01:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.005 02:01:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.005 02:01:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.005 ************************************ 00:04:24.005 START TEST odd_alloc 00:04:24.005 ************************************ 00:04:24.005 02:01:38 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:24.005 02:01:38 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:24.005 02:01:38 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:24.005 02:01:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:24.005 02:01:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.005 02:01:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:24.005 02:01:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:24.005 02:01:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.005 02:01:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.005 02:01:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:24.005 02:01:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.005 02:01:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.005 02:01:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.005 02:01:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.005 02:01:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:24.005 02:01:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.005 02:01:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:24.005 02:01:38 -- setup/hugepages.sh@83 -- # : 0 00:04:24.005 02:01:38 -- setup/hugepages.sh@84 -- # : 0 00:04:24.005 02:01:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.005 02:01:38 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:24.005 02:01:38 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:24.005 02:01:38 -- setup/hugepages.sh@160 -- # setup output 00:04:24.005 02:01:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.005 02:01:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.267 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.267 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.267 02:01:38 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:24.268 02:01:38 -- setup/hugepages.sh@89 -- # local node 00:04:24.268 02:01:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.268 02:01:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.268 02:01:38 -- setup/hugepages.sh@92 -- # local surp 00:04:24.268 02:01:38 -- setup/hugepages.sh@93 -- # local resv 00:04:24.268 02:01:38 -- setup/hugepages.sh@94 -- # local anon 00:04:24.268 02:01:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.268 02:01:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.268 02:01:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.268 02:01:38 -- setup/common.sh@18 -- # local node= 
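Just above, odd_alloc asks get_test_nr_hugepages for 2098176 kB of hugepage memory and ends up with nr_hugepages=1025, then exports HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes before running scripts/setup.sh. With the 2048 kB page size reported in the meminfo snapshots below, 2098176 kB is 1024.5 pages, so the count is evidently rounded up to an odd 1025 pages (1025 * 2048 kB = 2099200 kB, matching the "Hugetlb: 2099200 kB" lines further down). A rough reconstruction of that arithmetic; the exact expression lives in setup/hugepages.sh and is not visible in this trace:

# Assumed reconstruction of the page-count sizing traced above (not the literal SPDK code).
HUGEMEM=2049                                            # MiB, as exported above
size_kb=$(( HUGEMEM * 1024 ))                           # 2098176 kB, the argument to get_test_nr_hugepages
page_kb=2048                                            # Hugepagesize from the meminfo snapshots below
nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))   # ceiling division, yields 1025
echo "nr_hugepages=$nr_hugepages HUGEMEM=$HUGEMEM"

The odd count is what gives odd_alloc its name: 1025 pages cannot be split evenly, which is presumably why the test pairs it with HUGE_EVEN_ALLOC=yes when it hands the allocation to scripts/setup.sh. The verify_nr_hugepages / get_meminfo trace that started above continues below.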
00:04:24.268 02:01:38 -- setup/common.sh@19 -- # local var val 00:04:24.268 02:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.268 02:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.268 02:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.268 02:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.268 02:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.268 02:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7565292 kB' 'MemAvailable: 9497792 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888492 kB' 'Inactive: 1375064 kB' 'Active(anon): 129272 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120640 kB' 'Mapped: 48904 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144784 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74472 kB' 'KernelStack: 6384 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 
02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 
00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.268 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.268 02:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.269 02:01:38 -- setup/common.sh@33 -- # echo 0 00:04:24.269 02:01:38 -- setup/common.sh@33 -- # return 0 00:04:24.269 02:01:38 -- setup/hugepages.sh@97 -- # anon=0 00:04:24.269 02:01:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.269 02:01:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.269 02:01:38 -- setup/common.sh@18 -- # local node= 00:04:24.269 02:01:38 -- setup/common.sh@19 -- # local var val 00:04:24.269 02:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.269 02:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.269 02:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.269 02:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.269 02:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.269 02:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 
02:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7565136 kB' 'MemAvailable: 9497636 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888324 kB' 'Inactive: 1375064 kB' 'Active(anon): 129104 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120496 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144780 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74468 kB' 'KernelStack: 6368 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 
02:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.269 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.269 02:01:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 
02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.270 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.270 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.271 02:01:38 -- setup/common.sh@33 -- # echo 0 00:04:24.271 02:01:38 -- setup/common.sh@33 -- # return 0 00:04:24.271 02:01:38 -- setup/hugepages.sh@99 -- # surp=0 00:04:24.271 02:01:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.271 02:01:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.271 02:01:38 -- setup/common.sh@18 -- # local node= 00:04:24.271 02:01:38 -- setup/common.sh@19 -- # local var val 00:04:24.271 02:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.271 02:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.271 02:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.271 02:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.271 02:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.271 02:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7565136 kB' 'MemAvailable: 9497636 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888320 kB' 'Inactive: 1375064 kB' 'Active(anon): 129100 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120492 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144776 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74464 kB' 'KernelStack: 6368 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 
00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.271 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.271 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 
-- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 
02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.272 02:01:38 -- setup/common.sh@33 -- # echo 0 00:04:24.272 02:01:38 -- setup/common.sh@33 -- # return 0 00:04:24.272 02:01:38 -- setup/hugepages.sh@100 -- # resv=0 00:04:24.272 nr_hugepages=1025 00:04:24.272 02:01:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:24.272 resv_hugepages=0 00:04:24.272 02:01:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.272 surplus_hugepages=0 00:04:24.272 02:01:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.272 anon_hugepages=0 00:04:24.272 02:01:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.272 02:01:38 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.272 02:01:38 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:24.272 02:01:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.272 02:01:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.272 02:01:38 -- setup/common.sh@18 -- # local node= 00:04:24.272 02:01:38 -- setup/common.sh@19 -- # local var val 00:04:24.272 02:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.272 02:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.272 02:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.272 02:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.272 02:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.272 02:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7565396 kB' 'MemAvailable: 9497896 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888144 kB' 'Inactive: 1375064 kB' 'Active(anon): 128924 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120080 kB' 'Mapped: 49044 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144776 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74464 kB' 'KernelStack: 6336 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 341324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.272 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.272 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 
02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.273 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.273 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.274 02:01:38 -- setup/common.sh@33 -- # echo 1025 00:04:24.274 02:01:38 -- setup/common.sh@33 -- # return 0 00:04:24.274 02:01:38 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.274 02:01:38 -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.274 02:01:38 -- setup/hugepages.sh@27 -- # local node 00:04:24.274 02:01:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.274 02:01:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:24.274 02:01:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.274 02:01:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.274 02:01:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.274 02:01:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.274 02:01:38 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.274 02:01:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.274 02:01:38 -- setup/common.sh@18 -- # local node=0 00:04:24.274 02:01:38 -- setup/common.sh@19 -- # local var val 00:04:24.274 02:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.274 02:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.274 02:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.274 02:01:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.274 02:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.274 02:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7565320 kB' 'MemUsed: 4676656 kB' 'SwapCached: 0 kB' 'Active: 888284 kB' 'Inactive: 1375064 kB' 'Active(anon): 129064 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2144748 kB' 'Mapped: 48784 kB' 'AnonPages: 120508 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70312 kB' 'Slab: 144768 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 
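(Editor's note) Below, the same lookup runs again for HugePages_Surp, this time against /sys/devices/system/node/node0/meminfo, because hugepages.sh cross-checks its request both globally and per NUMA node. The invariant being exercised, in sketch form with illustrative names, reusing the meminfo_value helper sketched earlier:

# Global check seen in the log: HugePages_Total == requested + surplus + reserved,
# followed by a per-node surplus lookup for every node directory (node0 here).
verify_hugepage_count() {
    local requested=$1 total surp resv node
    total=$(meminfo_value HugePages_Total)
    surp=$(meminfo_value HugePages_Surp)
    resv=$(meminfo_value HugePages_Rsvd)
    (( total == requested + surp + resv )) || return 1
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        echo "node${node} surplus: $(meminfo_value HugePages_Surp "$node")"
    done
}
# verify_hugepage_count 1025   # the odd_alloc case running above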
00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.274 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.274 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 
02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # continue 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.275 02:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.275 02:01:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.275 02:01:38 -- setup/common.sh@33 -- # echo 0 00:04:24.275 02:01:38 -- setup/common.sh@33 -- # return 0 00:04:24.275 02:01:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.275 02:01:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.275 02:01:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.275 02:01:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.275 node0=1025 expecting 1025 00:04:24.275 02:01:38 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:24.275 02:01:38 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:24.275 00:04:24.275 real 0m0.466s 00:04:24.275 user 0m0.269s 00:04:24.275 sys 0m0.226s 00:04:24.275 02:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.275 02:01:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.275 ************************************ 00:04:24.275 END TEST odd_alloc 00:04:24.275 ************************************ 00:04:24.275 02:01:38 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:24.275 02:01:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.275 02:01:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.275 02:01:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.275 ************************************ 00:04:24.275 START TEST custom_alloc 00:04:24.275 ************************************ 00:04:24.275 02:01:38 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:24.275 02:01:38 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:24.275 02:01:38 -- setup/hugepages.sh@169 -- # local node 00:04:24.275 02:01:38 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:24.275 02:01:38 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:24.275 02:01:38 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:24.275 02:01:38 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:24.275 02:01:38 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:24.275 02:01:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:24.275 02:01:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.275 02:01:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
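(Editor's note) The custom_alloc run that starts just above asks get_test_nr_hugepages for 1048576 kB (1 GiB) and lands on nr_hugepages=512: the requested size divided by the 2048 kB Hugepagesize reported in the meminfo dumps. The pages are then pinned to node 0 through the HUGENODE string visible below. A sketch of the arithmetic using values copied from the log (the script's own bookkeeping may differ in detail):

size_kb=1048576      # requested size from the trace: 1 GiB expressed in kB
hugepage_kb=2048     # Hugepagesize reported in the meminfo dumps
echo $(( size_kb / hugepage_kb ))   # -> 512, matching nr_hugepages=512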
00:04:24.275 02:01:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:24.275 02:01:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.275 02:01:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.275 02:01:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.275 02:01:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.275 02:01:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.276 02:01:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.276 02:01:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.276 02:01:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:24.276 02:01:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.276 02:01:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:24.276 02:01:38 -- setup/hugepages.sh@83 -- # : 0 00:04:24.276 02:01:38 -- setup/hugepages.sh@84 -- # : 0 00:04:24.276 02:01:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.276 02:01:38 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:24.276 02:01:38 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:24.276 02:01:38 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:24.276 02:01:38 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:24.276 02:01:38 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:24.276 02:01:38 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:24.276 02:01:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.276 02:01:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.276 02:01:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.276 02:01:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.276 02:01:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.276 02:01:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.276 02:01:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.276 02:01:38 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:24.276 02:01:38 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:24.276 02:01:38 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:24.276 02:01:38 -- setup/hugepages.sh@78 -- # return 0 00:04:24.276 02:01:38 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:24.276 02:01:38 -- setup/hugepages.sh@187 -- # setup output 00:04:24.276 02:01:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.276 02:01:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.796 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.796 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.796 02:01:39 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:24.796 02:01:39 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:24.796 02:01:39 -- setup/hugepages.sh@89 -- # local node 00:04:24.796 02:01:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.796 02:01:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.796 02:01:39 -- setup/hugepages.sh@92 -- # local surp 00:04:24.796 02:01:39 -- setup/hugepages.sh@93 -- # local resv 00:04:24.796 02:01:39 -- setup/hugepages.sh@94 -- # local anon 00:04:24.796 02:01:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.796 02:01:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.796 
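(Editor's note) The test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] just above is inspecting the kernel's transparent-hugepage mode; only when "[never]" is not the selected mode does verify_nr_hugepages go on to read AnonHugePages (which comes back as anon=0 further down). A simplified, standalone version of that gate, assuming the meminfo_value helper sketched earlier:

# Skip the AnonHugePages lookup when transparent hugepages are pinned to "never".
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(meminfo_value AnonHugePages)
fi
echo "anon=${anon}"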
02:01:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.796 02:01:39 -- setup/common.sh@18 -- # local node= 00:04:24.796 02:01:39 -- setup/common.sh@19 -- # local var val 00:04:24.796 02:01:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.796 02:01:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.796 02:01:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.796 02:01:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.796 02:01:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.796 02:01:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8616836 kB' 'MemAvailable: 10549336 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 889196 kB' 'Inactive: 1375064 kB' 'Active(anon): 129976 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 121052 kB' 'Mapped: 48888 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144760 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74448 kB' 'KernelStack: 6408 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 
02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.796 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.796 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.797 02:01:39 -- setup/common.sh@33 -- # echo 0 00:04:24.797 02:01:39 -- setup/common.sh@33 -- # return 0 00:04:24.797 02:01:39 -- setup/hugepages.sh@97 -- # anon=0 00:04:24.797 02:01:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.797 02:01:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.797 02:01:39 -- setup/common.sh@18 -- # local node= 00:04:24.797 02:01:39 -- setup/common.sh@19 -- # local var val 00:04:24.797 02:01:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.797 02:01:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.797 02:01:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.797 02:01:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.797 02:01:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.797 02:01:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.797 02:01:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8617040 kB' 'MemAvailable: 10549540 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888492 kB' 'Inactive: 1375064 kB' 'Active(anon): 129272 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120656 kB' 'Mapped: 48888 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144768 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74456 kB' 'KernelStack: 6344 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.797 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.797 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 
00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.798 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.798 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 
02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.799 02:01:39 -- setup/common.sh@33 -- # echo 0 00:04:24.799 02:01:39 -- setup/common.sh@33 -- # return 0 00:04:24.799 02:01:39 -- setup/hugepages.sh@99 -- # surp=0 00:04:24.799 02:01:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.799 02:01:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.799 02:01:39 -- setup/common.sh@18 -- # local node= 00:04:24.799 02:01:39 -- setup/common.sh@19 -- # local var val 00:04:24.799 02:01:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.799 02:01:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.799 02:01:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.799 02:01:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.799 02:01:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.799 02:01:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8617040 kB' 'MemAvailable: 10549540 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888460 kB' 'Inactive: 1375064 kB' 'Active(anon): 129240 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120668 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144772 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74460 kB' 'KernelStack: 6368 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344840 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 
00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.799 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.799 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 
-- # read -r var val _ 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.800 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.800 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.801 02:01:39 -- setup/common.sh@33 -- # echo 0 00:04:24.801 02:01:39 -- setup/common.sh@33 -- # return 0 00:04:24.801 02:01:39 -- setup/hugepages.sh@100 -- # resv=0 00:04:24.801 02:01:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:24.801 nr_hugepages=512 00:04:24.801 resv_hugepages=0 00:04:24.801 02:01:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.801 surplus_hugepages=0 00:04:24.801 02:01:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.801 anon_hugepages=0 00:04:24.801 02:01:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.801 02:01:39 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:24.801 02:01:39 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:24.801 02:01:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.801 02:01:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.801 02:01:39 -- setup/common.sh@18 -- # local node= 00:04:24.801 02:01:39 -- setup/common.sh@19 -- # local var val 00:04:24.801 02:01:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.801 02:01:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.801 02:01:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.801 02:01:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.801 02:01:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.801 02:01:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8617300 kB' 'MemAvailable: 10549800 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 889060 kB' 'Inactive: 1375064 kB' 'Active(anon): 129840 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 121068 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144768 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74456 kB' 'KernelStack: 6368 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 
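The long runs of `IFS=': '` / `read -r var val _` / `continue` above are the xtrace of the get_meminfo helper in setup/common.sh scanning a meminfo file one field at a time until the requested key (HugePages_Surp, HugePages_Rsvd, HugePages_Total, ...) matches, then echoing just the numeric value. A minimal stand-alone sketch of that pattern follows; the function and variable names are illustrative, not the repo's exact code, and it hardcodes /proc/meminfo where the real helper switches files per node:

```bash
#!/usr/bin/env bash
# Sketch: look up one field of /proc/meminfo the way the trace above does,
# splitting each line on ': ' and skipping keys until the wanted one matches.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue
        echo "$val"          # numeric value only; the "kB" unit lands in the discarded field
        return 0
    done < /proc/meminfo
    return 1                 # key not present
}

# Example: an accounting check along the lines of the one traced at hugepages.sh@107.
surp=$(get_meminfo_field HugePages_Surp)
resv=$(get_meminfo_field HugePages_Rsvd)
total=$(get_meminfo_field HugePages_Total)
nr_hugepages=512
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
```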
00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.801 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.801 02:01:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.802 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.802 02:01:39 -- setup/common.sh@33 -- # echo 512 00:04:24.802 02:01:39 -- setup/common.sh@33 -- # return 0 00:04:24.802 02:01:39 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:24.802 02:01:39 -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.802 02:01:39 -- setup/hugepages.sh@27 -- # local node 00:04:24.802 02:01:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.802 02:01:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:24.802 02:01:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.802 02:01:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.802 02:01:39 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:04:24.802 02:01:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.802 02:01:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.802 02:01:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.802 02:01:39 -- setup/common.sh@18 -- # local node=0 00:04:24.802 02:01:39 -- setup/common.sh@19 -- # local var val 00:04:24.802 02:01:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.802 02:01:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.802 02:01:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.802 02:01:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.802 02:01:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.802 02:01:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.802 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8617404 kB' 'MemUsed: 3624572 kB' 'SwapCached: 0 kB' 'Active: 888268 kB' 'Inactive: 1375064 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2144748 kB' 'Mapped: 48784 kB' 'AnonPages: 120216 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70312 kB' 'Slab: 144764 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 
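When get_meminfo is called with a node argument (node=0 above), the trace shows mem_f switching to /sys/devices/system/node/node0/meminfo and the mapfile result being run through `${mem[@]#Node +([0-9]) }`, which strips the "Node 0 " prefix that per-node meminfo lines carry before the same key scan runs. A hedged sketch of that per-node variant, with illustrative names and a fallback that is mine rather than the script's:

```bash
#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern used to strip the node prefix

# Sketch: read one field from a per-node meminfo file, assuming the
# "Node <N> " prefix stripping seen at setup/common.sh@29 in the trace.
get_node_meminfo_field() {
    local want=$1 node=$2 var val _ line
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo        # fallback is an assumption here
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example: per-node surplus pages, as checked for node0 in the trace above.
get_node_meminfo_field HugePages_Surp 0
```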
02:01:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 
-- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.803 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.803 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.804 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.804 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.804 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.804 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.804 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.804 02:01:39 -- setup/common.sh@32 -- # continue 00:04:24.804 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.804 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.804 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.804 02:01:39 -- setup/common.sh@33 -- # echo 0 00:04:24.804 02:01:39 -- setup/common.sh@33 -- # return 0 00:04:24.804 02:01:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.804 02:01:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.804 02:01:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.804 02:01:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.804 node0=512 expecting 512 00:04:24.804 02:01:39 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:24.804 02:01:39 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:24.804 00:04:24.804 real 0m0.465s 00:04:24.804 user 0m0.249s 00:04:24.804 sys 0m0.245s 00:04:24.804 02:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.804 02:01:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.804 ************************************ 00:04:24.804 END TEST custom_alloc 00:04:24.804 ************************************ 00:04:24.804 02:01:39 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:24.804 02:01:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.804 02:01:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.804 02:01:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.804 ************************************ 00:04:24.804 START TEST no_shrink_alloc 00:04:24.804 ************************************ 00:04:24.804 02:01:39 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:24.804 02:01:39 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:24.804 02:01:39 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:24.804 02:01:39 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:24.804 02:01:39 -- setup/hugepages.sh@51 -- # shift 00:04:24.804 02:01:39 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:24.804 02:01:39 -- setup/hugepages.sh@52 -- # local node_ids 00:04:24.804 02:01:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.804 02:01:39 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:04:24.804 02:01:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:24.804 02:01:39 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:24.804 02:01:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.804 02:01:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:24.804 02:01:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.804 02:01:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.804 02:01:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.804 02:01:39 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:24.804 02:01:39 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:24.804 02:01:39 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:24.804 02:01:39 -- setup/hugepages.sh@73 -- # return 0 00:04:24.804 02:01:39 -- setup/hugepages.sh@198 -- # setup output 00:04:24.804 02:01:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.804 02:01:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.062 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.062 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.323 02:01:39 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:25.323 02:01:39 -- setup/hugepages.sh@89 -- # local node 00:04:25.323 02:01:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.323 02:01:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.323 02:01:39 -- setup/hugepages.sh@92 -- # local surp 00:04:25.323 02:01:39 -- setup/hugepages.sh@93 -- # local resv 00:04:25.323 02:01:39 -- setup/hugepages.sh@94 -- # local anon 00:04:25.323 02:01:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.323 02:01:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.323 02:01:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.323 02:01:39 -- setup/common.sh@18 -- # local node= 00:04:25.323 02:01:39 -- setup/common.sh@19 -- # local var val 00:04:25.323 02:01:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.323 02:01:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.323 02:01:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.323 02:01:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.323 02:01:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.323 02:01:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7572500 kB' 'MemAvailable: 9505000 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888312 kB' 'Inactive: 1375064 kB' 'Active(anon): 129092 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120424 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144808 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74496 kB' 'KernelStack: 6352 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 
0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.323 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 
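Before counting anonymous huge pages, hugepages.sh@96 above tests the current transparent_hugepage setting ("always [madvise] never") against *[never]*, and only then calls get_meminfo AnonHugePages. A plausible reading is that AnonHugePages is only worth sampling when THP is not hard-disabled; the sketch below mirrors that gate, with the awk lookup and variable names being mine:

```bash
#!/usr/bin/env bash
# Sketch: only sample AnonHugePages when transparent hugepages are not
# hard-disabled, mirroring the "!= *[never]*" test at hugepages.sh@96.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP can back anonymous mappings with huge pages, so account for them too.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=${anon:-0}"
```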
00:04:25.323 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.323 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 
02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # 
continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.324 02:01:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.324 02:01:39 -- setup/common.sh@33 -- # echo 0 00:04:25.324 02:01:39 -- setup/common.sh@33 -- # return 0 00:04:25.324 02:01:39 -- setup/hugepages.sh@97 -- # anon=0 00:04:25.324 02:01:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.324 02:01:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.324 02:01:39 -- setup/common.sh@18 -- # local node= 00:04:25.324 02:01:39 -- setup/common.sh@19 -- # local var val 00:04:25.324 02:01:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.324 02:01:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.324 02:01:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.324 02:01:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.324 02:01:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.324 02:01:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.324 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.325 02:01:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7572500 kB' 'MemAvailable: 9505000 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888320 kB' 'Inactive: 1375064 kB' 'Active(anon): 129100 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120432 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144804 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74492 kB' 'KernelStack: 6336 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:25.325 02:01:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.325 02:01:39 -- setup/common.sh@32 -- # 
00:04:25.325 [xtrace elided: setup/common.sh@31-32 per-field scan of /proc/meminfo continues, skipping each remaining field until HugePages_Surp is matched]
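The loop being traced here is a field lookup over /proc/meminfo: common.sh splits each "Field: value" line on IFS=': ', skips every field that is not the requested one with "continue", and echoes the matching value back to hugepages.sh. A minimal stand-alone sketch of that lookup, assuming a hypothetical helper name (get_field is not SPDK's actual function in test/setup/common.sh):

# Hypothetical re-implementation of the lookup exercised by the xtrace above.
get_field() {
    local want=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # skip every other meminfo field
        echo "$val"                         # kB value, or a bare count for HugePages_*
        return 0
    done < "$file"
    return 1
}
# usage: get_field AnonHugePages    -> 0 on this builder
#        get_field HugePages_Surp   -> 0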
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.326 02:01:39 -- setup/common.sh@33 -- # echo 0 00:04:25.326 02:01:39 -- setup/common.sh@33 -- # return 0 00:04:25.326 02:01:39 -- setup/hugepages.sh@99 -- # surp=0 00:04:25.326 02:01:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.326 02:01:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.326 02:01:39 -- setup/common.sh@18 -- # local node= 00:04:25.326 02:01:39 -- setup/common.sh@19 -- # local var val 00:04:25.326 02:01:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.326 02:01:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.326 02:01:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.326 02:01:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.326 02:01:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.326 02:01:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.326 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.326 02:01:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7572452 kB' 'MemAvailable: 9504952 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888340 kB' 'Inactive: 1375064 kB' 'Active(anon): 129120 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120508 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144796 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74484 kB' 'KernelStack: 6368 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:25.326 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.326 02:01:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.326 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.326 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.326 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.326 02:01:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.326 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.326 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.326 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.326 02:01:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.326 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.326 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.326 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.326 02:01:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.326 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.326 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.326 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.326 02:01:39 -- 
00:04:25.326 [xtrace elided: setup/common.sh@31-32 per-field scan of /proc/meminfo continues, skipping each remaining field until HugePages_Rsvd is matched]
00:04:25.327 02:01:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.327 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.327 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.327 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.327 02:01:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.327 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.327 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.327 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.327 02:01:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.327 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.327 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.328 02:01:39 -- setup/common.sh@33 -- # echo 0 00:04:25.328 02:01:39 -- setup/common.sh@33 -- # return 0 00:04:25.328 02:01:39 -- setup/hugepages.sh@100 -- # resv=0 00:04:25.328 nr_hugepages=1024 00:04:25.328 02:01:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.328 resv_hugepages=0 00:04:25.328 02:01:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.328 surplus_hugepages=0 00:04:25.328 02:01:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.328 anon_hugepages=0 00:04:25.328 02:01:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.328 02:01:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.328 02:01:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.328 02:01:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.328 02:01:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.328 02:01:39 -- setup/common.sh@18 -- # local node= 00:04:25.328 02:01:39 -- setup/common.sh@19 -- # local var val 00:04:25.328 02:01:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.328 02:01:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
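At this point anon, surp and resv have all been read back as 0 and nr_hugepages is echoed as 1024, and hugepages.sh@107 checks that the expected page count equals nr_hugepages + surp + resv. The snippet below is only an illustrative cross-check of that arithmetic, reading the counts straight from /proc/meminfo rather than from the script's shell variables; it is not part of hugepages.sh:

# Stand-alone cross-check of the hugepage accounting seen in the trace.
expected=1024
nr=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
if (( expected == nr + surp + resv )); then
    echo "hugepage accounting OK: total=$nr surplus=$surp reserved=$resv"
else
    echo "unexpected hugepage counts: total=$nr surplus=$surp reserved=$resv" >&2
fi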
00:04:25.328 02:01:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.328 02:01:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.328 02:01:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.328 02:01:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7572452 kB' 'MemAvailable: 9504952 kB' 'Buffers: 2436 kB' 'Cached: 2142312 kB' 'SwapCached: 0 kB' 'Active: 888296 kB' 'Inactive: 1375064 kB' 'Active(anon): 129076 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 120448 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144788 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74476 kB' 'KernelStack: 6352 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.328 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.328 02:01:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.328 02:01:39 -- 
00:04:25.329 [xtrace elided: setup/common.sh@31-32 per-field scan of /proc/meminfo continues, skipping each remaining field until HugePages_Total is matched]
00:04:25.329 02:01:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.329 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.329 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.329 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.329 02:01:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.329 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.329 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.329 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.329 02:01:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.329 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.329 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.329 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.329 02:01:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.329 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.329 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.329 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.329 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.329 02:01:39 -- setup/common.sh@33 -- # echo 1024 00:04:25.329 02:01:39 -- setup/common.sh@33 -- # return 0 00:04:25.330 02:01:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.330 02:01:39 -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.330 02:01:39 -- setup/hugepages.sh@27 -- # local node 00:04:25.330 02:01:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.330 02:01:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.330 02:01:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.330 02:01:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.330 02:01:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.330 02:01:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.330 02:01:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.330 02:01:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.330 02:01:39 -- setup/common.sh@18 -- # local node=0 00:04:25.330 02:01:39 -- setup/common.sh@19 -- # local var val 00:04:25.330 02:01:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.330 02:01:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.330 02:01:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.330 02:01:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.330 02:01:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.330 02:01:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7572712 kB' 'MemUsed: 4669264 kB' 'SwapCached: 0 kB' 'Active: 888660 kB' 'Inactive: 1375064 kB' 'Active(anon): 129440 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2144748 kB' 'Mapped: 48784 kB' 'AnonPages: 120360 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70312 kB' 'Slab: 144788 
kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 
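For the per-node pass, common.sh switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from every captured line (the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace) before running the same field scan. A hedged stand-alone equivalent using awk instead of the extglob strip, with node_meminfo as a made-up name:

# Hypothetical per-node lookup; lines in nodeN/meminfo carry a "Node N" prefix,
# so the field name is the third column and its value the fourth.
node_meminfo() {
    local node=$1 field=$2
    awk -v f="${field}:" '$3 == f {print $4; exit}' \
        "/sys/devices/system/node/node${node}/meminfo"
}
# usage: node_meminfo 0 HugePages_Surp   -> 0 on this builder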
00:04:25.330 [xtrace elided: setup/common.sh@31-32 per-field scan of node0 meminfo continues, skipping each remaining field until HugePages_Surp is matched]
setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.330 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.330 02:01:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # continue 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.331 02:01:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.331 02:01:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.331 02:01:39 -- setup/common.sh@33 -- # echo 0 00:04:25.331 02:01:39 -- setup/common.sh@33 -- # return 0 00:04:25.331 02:01:39 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:25.331 02:01:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.331 02:01:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.331 02:01:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.331 node0=1024 expecting 1024 00:04:25.331 02:01:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.331 02:01:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.331 02:01:39 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:25.331 02:01:39 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:25.331 02:01:39 -- setup/hugepages.sh@202 -- # setup output 00:04:25.331 02:01:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.331 02:01:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.592 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.592 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.592 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:25.592 02:01:40 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:25.592 02:01:40 -- setup/hugepages.sh@89 -- # local node 00:04:25.592 02:01:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.592 02:01:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.592 02:01:40 -- setup/hugepages.sh@92 -- # local surp 00:04:25.592 02:01:40 -- setup/hugepages.sh@93 -- # local resv 00:04:25.592 02:01:40 -- setup/hugepages.sh@94 -- # local anon 00:04:25.592 02:01:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.592 02:01:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.592 02:01:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.592 02:01:40 -- setup/common.sh@18 -- # local node= 00:04:25.592 02:01:40 -- setup/common.sh@19 -- # local var val 00:04:25.592 02:01:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.592 02:01:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.592 02:01:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.592 02:01:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.592 02:01:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.593 02:01:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7569860 kB' 'MemAvailable: 9502364 kB' 'Buffers: 2436 kB' 'Cached: 2142316 kB' 'SwapCached: 0 kB' 'Active: 888636 kB' 'Inactive: 1375068 kB' 'Active(anon): 129416 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120784 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144792 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74480 kB' 'KernelStack: 6376 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- 
setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.593 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.593 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.594 02:01:40 -- setup/common.sh@33 -- # echo 0 00:04:25.594 02:01:40 -- setup/common.sh@33 -- # return 0 00:04:25.594 02:01:40 -- setup/hugepages.sh@97 -- # anon=0 00:04:25.594 02:01:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.594 02:01:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.594 02:01:40 -- setup/common.sh@18 -- # local node= 00:04:25.594 02:01:40 -- setup/common.sh@19 -- # local var val 00:04:25.594 02:01:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.594 02:01:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.594 02:01:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.594 02:01:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.594 02:01:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.594 02:01:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7569860 kB' 'MemAvailable: 9502364 kB' 'Buffers: 2436 kB' 'Cached: 2142316 kB' 'SwapCached: 0 kB' 'Active: 888352 kB' 'Inactive: 1375068 kB' 'Active(anon): 129132 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120532 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144784 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74472 kB' 'KernelStack: 6368 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 
02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.594 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.594 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
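Within verify_nr_hugepages, the anon term that was set to 0 a little earlier (setup/hugepages.sh@97 above) only counts AnonHugePages when transparent hugepages are not disabled: the '[[ always [madvise] never != *[never]* ]]' test at hugepages.sh@96 passed, and AnonHugePages is 0 kB on this VM, hence anon=0. A hedged sketch of that branch, reusing the illustrative get_meminfo_sketch helper above and assuming the usual sysfs location of the THP switch (the path and variable names are assumptions, not taken from this log):

    # Sketch: count anonymous (transparent) hugepages only when THP is not
    # set to [never], mirroring the hugepages.sh@96/@97 entries above.
    anon=0
    thp=/sys/kernel/mm/transparent_hugepage/enabled   # assumed path
    if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)       # 0 kB in this run
    fi
    echo "anon=$anon"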
00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.595 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.595 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.595 02:01:40 -- 
setup/common.sh@33 -- # echo 0 00:04:25.595 02:01:40 -- setup/common.sh@33 -- # return 0 00:04:25.595 02:01:40 -- setup/hugepages.sh@99 -- # surp=0 00:04:25.595 02:01:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.595 02:01:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.595 02:01:40 -- setup/common.sh@18 -- # local node= 00:04:25.595 02:01:40 -- setup/common.sh@19 -- # local var val 00:04:25.595 02:01:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.595 02:01:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.595 02:01:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.596 02:01:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.596 02:01:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.596 02:01:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7569860 kB' 'MemAvailable: 9502364 kB' 'Buffers: 2436 kB' 'Cached: 2142316 kB' 'SwapCached: 0 kB' 'Active: 888344 kB' 'Inactive: 1375068 kB' 'Active(anon): 129124 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120276 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144784 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74472 kB' 'KernelStack: 6368 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- 
setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.596 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.596 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.857 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.857 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.857 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.858 02:01:40 -- setup/common.sh@33 -- # echo 0 00:04:25.858 02:01:40 -- setup/common.sh@33 -- # return 0 00:04:25.858 02:01:40 -- setup/hugepages.sh@100 -- # resv=0 00:04:25.858 nr_hugepages=1024 00:04:25.858 02:01:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.858 resv_hugepages=0 00:04:25.858 02:01:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.858 surplus_hugepages=0 00:04:25.858 02:01:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.858 anon_hugepages=0 00:04:25.858 02:01:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.858 02:01:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.858 02:01:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.858 02:01:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.858 02:01:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.858 02:01:40 -- setup/common.sh@18 -- # local node= 00:04:25.858 02:01:40 -- setup/common.sh@19 -- # local var val 00:04:25.858 02:01:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.858 02:01:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.858 02:01:40 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:25.858 02:01:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.858 02:01:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.858 02:01:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7569860 kB' 'MemAvailable: 9502364 kB' 'Buffers: 2436 kB' 'Cached: 2142316 kB' 'SwapCached: 0 kB' 'Active: 888300 kB' 'Inactive: 1375068 kB' 'Active(anon): 129080 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120492 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70312 kB' 'Slab: 144780 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74468 kB' 'KernelStack: 6352 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.858 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.858 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 
02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.859 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.859 02:01:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.860 02:01:40 -- setup/common.sh@33 -- # echo 1024 00:04:25.860 02:01:40 -- setup/common.sh@33 -- # return 0 00:04:25.860 02:01:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.860 02:01:40 -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.860 02:01:40 -- setup/hugepages.sh@27 -- # local node 00:04:25.860 02:01:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.860 02:01:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.860 02:01:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.860 02:01:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.860 02:01:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.860 02:01:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.860 02:01:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.860 02:01:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.860 02:01:40 -- setup/common.sh@18 -- # local node=0 00:04:25.860 02:01:40 -- setup/common.sh@19 -- # local var val 00:04:25.860 02:01:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.860 02:01:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.860 02:01:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.860 02:01:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.860 02:01:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.860 02:01:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7569860 kB' 'MemUsed: 4672116 kB' 'SwapCached: 0 kB' 'Active: 888252 kB' 'Inactive: 1375068 kB' 'Active(anon): 129032 kB' 'Inactive(anon): 0 kB' 'Active(file): 759220 kB' 'Inactive(file): 1375068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2144752 kB' 'Mapped: 48784 kB' 'AnonPages: 120396 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70312 kB' 'Slab: 144776 kB' 'SReclaimable: 70312 kB' 'SUnreclaim: 74464 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.860 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # continue 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 02:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 02:01:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.861 02:01:40 -- setup/common.sh@33 -- # echo 0 00:04:25.861 02:01:40 -- setup/common.sh@33 -- # return 0 00:04:25.861 02:01:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.861 02:01:40 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.861 02:01:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.861 02:01:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.861 02:01:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.861 node0=1024 expecting 1024 00:04:25.861 02:01:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.861 00:04:25.861 real 0m0.900s 00:04:25.861 user 0m0.488s 00:04:25.861 sys 0m0.456s 00:04:25.861 02:01:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.861 02:01:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.861 ************************************ 00:04:25.861 END TEST no_shrink_alloc 00:04:25.861 ************************************ 00:04:25.861 02:01:40 -- setup/hugepages.sh@217 -- # clear_hp 00:04:25.861 02:01:40 -- setup/hugepages.sh@37 -- # local node hp 00:04:25.861 02:01:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.861 02:01:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.861 02:01:40 -- setup/hugepages.sh@41 -- # echo 0 00:04:25.861 02:01:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.861 02:01:40 -- setup/hugepages.sh@41 -- # echo 0 00:04:25.861 02:01:40 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:25.861 02:01:40 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:25.861 00:04:25.861 real 0m4.031s 00:04:25.861 user 0m2.051s 00:04:25.861 sys 0m2.076s 00:04:25.861 02:01:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.861 02:01:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.861 ************************************ 00:04:25.861 END TEST hugepages 00:04:25.862 ************************************ 00:04:25.862 02:01:40 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:25.862 02:01:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.862 02:01:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.862 02:01:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.862 ************************************ 00:04:25.862 START TEST driver 00:04:25.862 ************************************ 00:04:25.862 02:01:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:25.862 * Looking for test storage... 
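The trace above is setup/common.sh's get_meminfo helper: it opens /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is given), reads field after field while skipping everything that is not the requested key (HugePages_Total, then HugePages_Surp for node 0), and echoes its value - here 1024 total huge pages and 0 surplus, which is what the "node0=1024 expecting 1024" check confirms. A minimal standalone sketch of the same lookup, assuming a simplified re-implementation (get_meminfo_sketch is a hypothetical name, not the SPDK helper):

# Minimal sketch of a get_meminfo-style lookup (assumption: simplified
# re-implementation for illustration, not the actual setup/common.sh code).
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node meminfo prefixes each line with "Node <n> "; strip that, then
    # scan field by field until the requested key turns up, as in the trace.
    sed "s/^Node $node //" "$mem_f" | while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            break
        fi
    done
}
# get_meminfo_sketch HugePages_Total     -> 1024 on the run traced above
# get_meminfo_sketch HugePages_Surp 0    -> 0 for node 0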
00:04:25.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:25.862 02:01:40 -- setup/driver.sh@68 -- # setup reset 00:04:25.862 02:01:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.862 02:01:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.426 02:01:40 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:26.426 02:01:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.426 02:01:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.426 02:01:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.426 ************************************ 00:04:26.426 START TEST guess_driver 00:04:26.426 ************************************ 00:04:26.426 02:01:40 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:26.426 02:01:40 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:26.426 02:01:40 -- setup/driver.sh@47 -- # local fail=0 00:04:26.426 02:01:40 -- setup/driver.sh@49 -- # pick_driver 00:04:26.426 02:01:40 -- setup/driver.sh@36 -- # vfio 00:04:26.426 02:01:40 -- setup/driver.sh@21 -- # local iommu_grups 00:04:26.426 02:01:40 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:26.426 02:01:40 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:26.426 02:01:40 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:26.426 02:01:40 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:26.426 02:01:40 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:26.426 02:01:40 -- setup/driver.sh@32 -- # return 1 00:04:26.426 02:01:40 -- setup/driver.sh@38 -- # uio 00:04:26.426 02:01:40 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:26.426 02:01:40 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:26.426 02:01:40 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:26.426 02:01:40 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:26.426 02:01:40 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:26.426 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:26.426 02:01:40 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:26.426 02:01:40 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:26.426 02:01:40 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:26.426 Looking for driver=uio_pci_generic 00:04:26.426 02:01:40 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:26.426 02:01:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.426 02:01:40 -- setup/driver.sh@45 -- # setup output config 00:04:26.426 02:01:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.426 02:01:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.990 02:01:41 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:26.990 02:01:41 -- setup/driver.sh@58 -- # continue 00:04:26.990 02:01:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.247 02:01:41 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.247 02:01:41 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:27.247 02:01:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.247 02:01:41 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.247 02:01:41 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:27.247 02:01:41 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.247 02:01:41 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:27.247 02:01:41 -- setup/driver.sh@65 -- # setup reset 00:04:27.247 02:01:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.247 02:01:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.812 00:04:27.812 real 0m1.333s 00:04:27.812 user 0m0.479s 00:04:27.812 sys 0m0.851s 00:04:27.812 02:01:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.812 ************************************ 00:04:27.812 END TEST guess_driver 00:04:27.812 ************************************ 00:04:27.812 02:01:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.812 00:04:27.812 real 0m1.925s 00:04:27.812 user 0m0.664s 00:04:27.812 sys 0m1.297s 00:04:27.812 02:01:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.812 02:01:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.812 ************************************ 00:04:27.812 END TEST driver 00:04:27.812 ************************************ 00:04:27.812 02:01:42 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:27.812 02:01:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.812 02:01:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.812 02:01:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.812 ************************************ 00:04:27.812 START TEST devices 00:04:27.812 ************************************ 00:04:27.812 02:01:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:27.812 * Looking for test storage... 00:04:27.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:27.812 02:01:42 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:27.812 02:01:42 -- setup/devices.sh@192 -- # setup reset 00:04:27.812 02:01:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.812 02:01:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.742 02:01:43 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:28.742 02:01:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:28.742 02:01:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:28.742 02:01:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:28.742 02:01:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:28.742 02:01:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:28.742 02:01:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:28.742 02:01:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:28.742 02:01:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:28.742 02:01:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:28.742 02:01:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:28.742 02:01:43 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:28.742 02:01:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:28.742 02:01:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:28.742 02:01:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:28.742 02:01:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:28.742 02:01:43 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:28.742 02:01:43 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:28.742 02:01:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:28.742 02:01:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:28.742 02:01:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:28.742 02:01:43 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:28.742 02:01:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:28.742 02:01:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:28.742 02:01:43 -- setup/devices.sh@196 -- # blocks=() 00:04:28.742 02:01:43 -- setup/devices.sh@196 -- # declare -a blocks 00:04:28.742 02:01:43 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:28.742 02:01:43 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:28.742 02:01:43 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:28.742 02:01:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:28.742 02:01:43 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:28.742 02:01:43 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:28.742 02:01:43 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:28.742 02:01:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:28.742 02:01:43 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:28.742 02:01:43 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:28.742 02:01:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:28.742 No valid GPT data, bailing 00:04:28.742 02:01:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:28.742 02:01:43 -- scripts/common.sh@393 -- # pt= 00:04:28.742 02:01:43 -- scripts/common.sh@394 -- # return 1 00:04:28.742 02:01:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:28.742 02:01:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:28.742 02:01:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:28.742 02:01:43 -- setup/common.sh@80 -- # echo 5368709120 00:04:28.742 02:01:43 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:28.742 02:01:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:28.742 02:01:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:28.742 02:01:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:28.742 02:01:43 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:28.742 02:01:43 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:28.742 02:01:43 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:28.742 02:01:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:28.742 02:01:43 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:28.742 02:01:43 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:28.742 02:01:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:28.742 No valid GPT data, bailing 00:04:28.742 02:01:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:28.742 02:01:43 -- scripts/common.sh@393 -- # pt= 00:04:28.742 02:01:43 -- scripts/common.sh@394 -- # return 1 00:04:28.742 02:01:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:28.742 02:01:43 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:28.742 02:01:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:28.742 02:01:43 -- setup/common.sh@80 -- # echo 4294967296 00:04:28.742 02:01:43 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:28.742 02:01:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:28.742 02:01:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:28.742 02:01:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:28.742 02:01:43 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:28.743 02:01:43 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:28.743 02:01:43 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:28.743 02:01:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:28.743 02:01:43 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:28.743 02:01:43 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:28.743 02:01:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:28.743 No valid GPT data, bailing 00:04:28.743 02:01:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:28.743 02:01:43 -- scripts/common.sh@393 -- # pt= 00:04:28.743 02:01:43 -- scripts/common.sh@394 -- # return 1 00:04:28.743 02:01:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:28.743 02:01:43 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:28.743 02:01:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:28.743 02:01:43 -- setup/common.sh@80 -- # echo 4294967296 00:04:28.743 02:01:43 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:28.743 02:01:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:28.743 02:01:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:28.743 02:01:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:28.743 02:01:43 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:28.743 02:01:43 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:28.743 02:01:43 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:28.743 02:01:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:28.743 02:01:43 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:28.743 02:01:43 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:28.743 02:01:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:28.743 No valid GPT data, bailing 00:04:28.743 02:01:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:28.743 02:01:43 -- scripts/common.sh@393 -- # pt= 00:04:28.743 02:01:43 -- scripts/common.sh@394 -- # return 1 00:04:28.743 02:01:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:28.743 02:01:43 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:28.743 02:01:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:28.743 02:01:43 -- setup/common.sh@80 -- # echo 4294967296 00:04:28.743 02:01:43 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:28.743 02:01:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:28.743 02:01:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:28.743 02:01:43 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:28.743 02:01:43 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:28.743 02:01:43 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:28.743 02:01:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.743 02:01:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.743 02:01:43 -- common/autotest_common.sh@10 -- # set +x 00:04:28.743 
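Before run_test nvme_mount starts, devices.sh has walked every /sys/block/nvme* namespace, skipped zoned ones, asked scripts/spdk-gpt.py whether each disk already carries GPT data (the repeated "No valid GPT data, bailing" lines mean the disks are free), checked each size against min_disk_size=3221225472 (3 GiB), and recorded the block-to-PCI mapping, ending with nvme0n1 as the test disk. A rough sketch of that selection loop, assuming blkid in place of the spdk-gpt.py helper and omitting the PCI_ALLOWED bookkeeping:

# Rough sketch of the disk-selection loop traced above (assumption: blkid stands
# in for scripts/spdk-gpt.py; blocks_to_pci / PCI_ALLOWED handling is omitted).
min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472 bytes, as in the trace
blocks=()
for block in /sys/block/nvme*; do
    dev=${block##*/}
    # Skip zoned namespaces (get_zoned_devs in the trace).
    [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]] && continue
    # Skip disks that already carry a partition table; "No valid GPT data, bailing"
    # in the trace means the disk is free and the in-use check returns 1.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
    size=$(( $(cat "$block/size") * 512 ))   # /sys/block/<dev>/size is in 512-byte sectors
    (( size >= min_disk_size )) && blocks+=("$dev")
done
printf 'usable test disks: %s\n' "${blocks[*]}"   # nvme0n1 nvme1n1 nvme1n2 nvme1n3 above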
************************************ 00:04:28.743 START TEST nvme_mount 00:04:28.743 ************************************ 00:04:28.743 02:01:43 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:28.743 02:01:43 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:28.743 02:01:43 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:28.743 02:01:43 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.743 02:01:43 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.743 02:01:43 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:28.743 02:01:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:28.743 02:01:43 -- setup/common.sh@40 -- # local part_no=1 00:04:28.743 02:01:43 -- setup/common.sh@41 -- # local size=1073741824 00:04:28.743 02:01:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:28.743 02:01:43 -- setup/common.sh@44 -- # parts=() 00:04:28.743 02:01:43 -- setup/common.sh@44 -- # local parts 00:04:28.743 02:01:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:28.743 02:01:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.743 02:01:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.743 02:01:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:28.743 02:01:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.743 02:01:43 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:28.743 02:01:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:28.743 02:01:43 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:30.116 Creating new GPT entries in memory. 00:04:30.116 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.116 other utilities. 00:04:30.116 02:01:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.116 02:01:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.116 02:01:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.116 02:01:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.116 02:01:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:31.047 Creating new GPT entries in memory. 00:04:31.047 The operation has completed successfully. 
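The "GPT data structures destroyed!" / "The operation has completed successfully." pair above is partition_drive at work: sgdisk --zap-all wipes the disk, sgdisk --new=1:2048:264191 creates a single small test partition, and sync_dev_uevents.sh blocks until udev reports the new nvme0n1p1 before the test formats and mounts it. A condensed sketch of that preparation, where $disk and $mnt are hypothetical stand-ins and partprobe plus udevadm settle replace the sync_dev_uevents.sh wait:

# Condensed sketch of the nvme_mount preparation traced above (assumption: $disk and
# $mnt are hypothetical stand-ins; the real test mounts under test/setup/nvme_mount).
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount_sketch
sgdisk "$disk" --zap-all                     # "GPT data structures destroyed!"
sgdisk "$disk" --new=1:2048:264191           # one small test partition, sectors 2048-264191
partprobe "$disk" && udevadm settle          # stand-in for the sync_dev_uevents.sh wait on nvme0n1p1
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                       # the dummy file the verify step looks for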
00:04:31.047 02:01:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.047 02:01:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.047 02:01:45 -- setup/common.sh@62 -- # wait 53873 00:04:31.047 02:01:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.047 02:01:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:31.047 02:01:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.047 02:01:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:31.047 02:01:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:31.047 02:01:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.047 02:01:45 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.047 02:01:45 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:31.047 02:01:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:31.047 02:01:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.047 02:01:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.047 02:01:45 -- setup/devices.sh@53 -- # local found=0 00:04:31.047 02:01:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.047 02:01:45 -- setup/devices.sh@56 -- # : 00:04:31.047 02:01:45 -- setup/devices.sh@59 -- # local pci status 00:04:31.047 02:01:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.047 02:01:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:31.047 02:01:45 -- setup/devices.sh@47 -- # setup output config 00:04:31.047 02:01:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.047 02:01:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.047 02:01:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.047 02:01:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:31.047 02:01:45 -- setup/devices.sh@63 -- # found=1 00:04:31.047 02:01:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.047 02:01:45 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.047 02:01:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.305 02:01:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.305 02:01:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.563 02:01:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.563 02:01:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.563 02:01:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.563 02:01:45 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:31.563 02:01:45 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.563 02:01:45 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.563 02:01:45 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.563 02:01:45 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:31.563 02:01:45 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.563 02:01:46 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.563 02:01:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.563 02:01:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:31.563 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.563 02:01:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.563 02:01:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.820 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:31.820 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:31.820 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:31.820 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:31.820 02:01:46 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:31.820 02:01:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:31.820 02:01:46 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.820 02:01:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:31.821 02:01:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:31.821 02:01:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.821 02:01:46 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.821 02:01:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:31.821 02:01:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:31.821 02:01:46 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.821 02:01:46 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.821 02:01:46 -- setup/devices.sh@53 -- # local found=0 00:04:31.821 02:01:46 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.821 02:01:46 -- setup/devices.sh@56 -- # : 00:04:31.821 02:01:46 -- setup/devices.sh@59 -- # local pci status 00:04:31.821 02:01:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.821 02:01:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:31.821 02:01:46 -- setup/devices.sh@47 -- # setup output config 00:04:31.821 02:01:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.821 02:01:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.078 02:01:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.078 02:01:46 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:32.078 02:01:46 -- setup/devices.sh@63 -- # found=1 00:04:32.078 02:01:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.078 02:01:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.078 
02:01:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.336 02:01:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.336 02:01:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.336 02:01:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.336 02:01:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.336 02:01:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.336 02:01:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:32.336 02:01:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.336 02:01:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.336 02:01:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.336 02:01:46 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.336 02:01:46 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:32.336 02:01:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:32.336 02:01:46 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:32.336 02:01:46 -- setup/devices.sh@50 -- # local mount_point= 00:04:32.336 02:01:46 -- setup/devices.sh@51 -- # local test_file= 00:04:32.336 02:01:46 -- setup/devices.sh@53 -- # local found=0 00:04:32.336 02:01:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.336 02:01:46 -- setup/devices.sh@59 -- # local pci status 00:04:32.336 02:01:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.336 02:01:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:32.336 02:01:46 -- setup/devices.sh@47 -- # setup output config 00:04:32.336 02:01:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.593 02:01:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.593 02:01:47 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.594 02:01:47 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:32.594 02:01:47 -- setup/devices.sh@63 -- # found=1 00:04:32.594 02:01:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.594 02:01:47 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.594 02:01:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.851 02:01:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.851 02:01:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.109 02:01:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.109 02:01:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.109 02:01:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.109 02:01:47 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.109 02:01:47 -- setup/devices.sh@68 -- # return 0 00:04:33.109 02:01:47 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:33.109 02:01:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.109 02:01:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.109 02:01:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.109 02:01:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.109 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:33.109 00:04:33.109 real 0m4.249s 00:04:33.109 user 0m0.885s 00:04:33.109 sys 0m1.077s 00:04:33.109 02:01:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.109 02:01:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.109 ************************************ 00:04:33.109 END TEST nvme_mount 00:04:33.109 ************************************ 00:04:33.109 02:01:47 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:33.109 02:01:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.109 02:01:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.109 02:01:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.110 ************************************ 00:04:33.110 START TEST dm_mount 00:04:33.110 ************************************ 00:04:33.110 02:01:47 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:33.110 02:01:47 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:33.110 02:01:47 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:33.110 02:01:47 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:33.110 02:01:47 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:33.110 02:01:47 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.110 02:01:47 -- setup/common.sh@40 -- # local part_no=2 00:04:33.110 02:01:47 -- setup/common.sh@41 -- # local size=1073741824 00:04:33.110 02:01:47 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.110 02:01:47 -- setup/common.sh@44 -- # parts=() 00:04:33.110 02:01:47 -- setup/common.sh@44 -- # local parts 00:04:33.110 02:01:47 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.110 02:01:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.110 02:01:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.110 02:01:47 -- setup/common.sh@46 -- # (( part++ )) 00:04:33.110 02:01:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.110 02:01:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.110 02:01:47 -- setup/common.sh@46 -- # (( part++ )) 00:04:33.110 02:01:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.110 02:01:47 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:33.110 02:01:47 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.110 02:01:47 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:34.044 Creating new GPT entries in memory. 00:04:34.044 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.044 other utilities. 00:04:34.044 02:01:48 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.044 02:01:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.044 02:01:48 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.044 02:01:48 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.044 02:01:48 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:35.444 Creating new GPT entries in memory. 00:04:35.444 The operation has completed successfully. 00:04:35.444 02:01:49 -- setup/common.sh@57 -- # (( part++ )) 00:04:35.444 02:01:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.444 02:01:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:35.444 02:01:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.444 02:01:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:36.399 The operation has completed successfully. 00:04:36.399 02:01:50 -- setup/common.sh@57 -- # (( part++ )) 00:04:36.399 02:01:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.399 02:01:50 -- setup/common.sh@62 -- # wait 54354 00:04:36.399 02:01:50 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:36.399 02:01:50 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.399 02:01:50 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.399 02:01:50 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:36.399 02:01:50 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:36.399 02:01:50 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.399 02:01:50 -- setup/devices.sh@161 -- # break 00:04:36.399 02:01:50 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.399 02:01:50 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:36.399 02:01:50 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:36.399 02:01:50 -- setup/devices.sh@166 -- # dm=dm-0 00:04:36.399 02:01:50 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:36.399 02:01:50 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:36.399 02:01:50 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.399 02:01:50 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:36.399 02:01:50 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.399 02:01:50 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.399 02:01:50 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:36.399 02:01:50 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.400 02:01:50 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.400 02:01:50 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:36.400 02:01:50 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:36.400 02:01:50 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.400 02:01:50 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.400 02:01:50 -- setup/devices.sh@53 -- # local found=0 00:04:36.400 02:01:50 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.400 02:01:50 -- setup/devices.sh@56 -- # : 00:04:36.400 02:01:50 -- setup/devices.sh@59 -- # local pci status 00:04:36.400 02:01:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.400 02:01:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:36.400 02:01:50 -- setup/devices.sh@47 -- # setup output config 00:04:36.400 02:01:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.400 02:01:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.400 02:01:50 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:36.400 02:01:50 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:36.400 02:01:50 -- setup/devices.sh@63 -- # found=1 00:04:36.400 02:01:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.400 02:01:50 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:36.400 02:01:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.659 02:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:36.659 02:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.917 02:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:36.917 02:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.917 02:01:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.917 02:01:51 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:36.917 02:01:51 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.917 02:01:51 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.917 02:01:51 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.917 02:01:51 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.917 02:01:51 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:36.917 02:01:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:36.917 02:01:51 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:36.917 02:01:51 -- setup/devices.sh@50 -- # local mount_point= 00:04:36.917 02:01:51 -- setup/devices.sh@51 -- # local test_file= 00:04:36.917 02:01:51 -- setup/devices.sh@53 -- # local found=0 00:04:36.917 02:01:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.917 02:01:51 -- setup/devices.sh@59 -- # local pci status 00:04:36.917 02:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.917 02:01:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:36.917 02:01:51 -- setup/devices.sh@47 -- # setup output config 00:04:36.917 02:01:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.917 02:01:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.175 02:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:37.175 02:01:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:37.175 02:01:51 -- setup/devices.sh@63 -- # found=1 00:04:37.175 02:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.175 02:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:37.175 02:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.433 02:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:37.433 02:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.433 02:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:37.433 02:01:51 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.433 02:01:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.433 02:01:52 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.433 02:01:52 -- setup/devices.sh@68 -- # return 0 00:04:37.433 02:01:52 -- setup/devices.sh@187 -- # cleanup_dm 00:04:37.433 02:01:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.433 02:01:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.433 02:01:52 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:37.691 02:01:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.691 02:01:52 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:37.691 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.691 02:01:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.691 02:01:52 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:37.691 00:04:37.691 real 0m4.453s 00:04:37.691 user 0m0.633s 00:04:37.691 sys 0m0.770s 00:04:37.691 02:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.691 ************************************ 00:04:37.691 END TEST dm_mount 00:04:37.691 02:01:52 -- common/autotest_common.sh@10 -- # set +x 00:04:37.691 ************************************ 00:04:37.691 02:01:52 -- setup/devices.sh@1 -- # cleanup 00:04:37.691 02:01:52 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:37.691 02:01:52 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.691 02:01:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.691 02:01:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:37.691 02:01:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.691 02:01:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.950 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:37.950 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:37.950 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:37.950 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:37.950 02:01:52 -- setup/devices.sh@12 -- # cleanup_dm 00:04:37.950 02:01:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.950 02:01:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.950 02:01:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.950 02:01:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.950 02:01:52 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.950 02:01:52 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:37.950 ************************************ 00:04:37.950 END TEST devices 00:04:37.950 ************************************ 00:04:37.950 00:04:37.950 real 0m10.086s 00:04:37.950 user 0m2.121s 00:04:37.950 sys 0m2.354s 00:04:37.950 02:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.950 02:01:52 -- common/autotest_common.sh@10 -- # set +x 00:04:37.950 00:04:37.950 real 0m20.201s 00:04:37.950 user 0m6.687s 00:04:37.951 sys 0m7.993s 00:04:37.951 02:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.951 02:01:52 -- common/autotest_common.sh@10 -- # set +x 00:04:37.951 ************************************ 00:04:37.951 END TEST setup.sh 00:04:37.951 ************************************ 00:04:37.951 02:01:52 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:38.209 Hugepages 00:04:38.209 node hugesize free / total 00:04:38.209 node0 1048576kB 0 / 0 00:04:38.209 node0 2048kB 2048 / 2048 00:04:38.209 00:04:38.209 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.209 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:38.209 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:38.467 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:38.467 02:01:52 -- spdk/autotest.sh@141 -- # uname -s 00:04:38.467 02:01:52 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:38.467 02:01:52 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:38.467 02:01:52 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.032 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.032 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.032 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.032 02:01:53 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:40.407 02:01:54 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:40.407 02:01:54 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:40.407 02:01:54 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:40.407 02:01:54 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:40.407 02:01:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:40.407 02:01:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:40.407 02:01:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.407 02:01:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:40.407 02:01:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:40.407 02:01:54 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:40.407 02:01:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:40.407 02:01:54 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.407 Waiting for block devices as requested 00:04:40.674 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.674 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.674 02:01:55 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:40.674 02:01:55 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:40.674 02:01:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.674 02:01:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:40.674 02:01:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:40.674 02:01:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:40.674 02:01:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:40.674 02:01:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:40.674 02:01:55 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:40.674 02:01:55 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:40.674 02:01:55 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:40.674 02:01:55 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:40.674 02:01:55 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:40.674 02:01:55 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:40.674 02:01:55 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:40.674 02:01:55 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:40.674 02:01:55 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:40.674 02:01:55 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:40.674 02:01:55 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:40.674 02:01:55 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:40.674 02:01:55 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:40.674 02:01:55 -- common/autotest_common.sh@1542 -- # continue 00:04:40.674 02:01:55 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:40.674 02:01:55 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:40.674 02:01:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.674 02:01:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:04:40.674 02:01:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:40.674 02:01:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:40.674 02:01:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:40.674 02:01:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:40.674 02:01:55 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:04:40.674 02:01:55 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:04:40.674 02:01:55 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:04:40.674 02:01:55 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:40.674 02:01:55 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:40.674 02:01:55 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:40.674 02:01:55 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:40.674 02:01:55 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:40.674 02:01:55 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:04:40.674 02:01:55 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:40.674 02:01:55 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:40.674 02:01:55 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:40.674 02:01:55 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:40.674 02:01:55 -- common/autotest_common.sh@1542 -- # continue 00:04:40.674 02:01:55 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:40.674 02:01:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:40.674 02:01:55 -- common/autotest_common.sh@10 -- # set +x 00:04:40.674 02:01:55 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:40.674 02:01:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:40.674 02:01:55 -- common/autotest_common.sh@10 -- # set +x 00:04:40.674 02:01:55 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.628 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.628 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:41.628 02:01:56 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:41.628 02:01:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:41.628 02:01:56 -- common/autotest_common.sh@10 -- # set +x 00:04:41.628 02:01:56 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:41.628 02:01:56 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:41.628 02:01:56 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.628 02:01:56 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:41.628 02:01:56 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:41.628 02:01:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:41.628 02:01:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:41.628 02:01:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:41.628 02:01:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.628 02:01:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:41.628 02:01:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:41.628 02:01:56 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:41.628 02:01:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:41.628 02:01:56 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:41.628 02:01:56 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:41.628 02:01:56 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:41.628 02:01:56 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.628 02:01:56 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:41.628 02:01:56 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:41.628 02:01:56 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:41.628 02:01:56 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.628 02:01:56 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:41.628 02:01:56 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:41.628 02:01:56 -- common/autotest_common.sh@1578 -- # return 0 00:04:41.628 02:01:56 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:41.628 02:01:56 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:41.628 02:01:56 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:41.628 02:01:56 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:41.628 02:01:56 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:41.628 02:01:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:41.628 02:01:56 -- common/autotest_common.sh@10 -- # set +x 00:04:41.628 02:01:56 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.628 02:01:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.628 02:01:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.628 02:01:56 -- common/autotest_common.sh@10 -- # set +x 00:04:41.628 ************************************ 00:04:41.628 START TEST env 00:04:41.628 ************************************ 00:04:41.628 02:01:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.885 * Looking for test storage... 
00:04:41.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:41.885 02:01:56 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.885 02:01:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.885 02:01:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.885 02:01:56 -- common/autotest_common.sh@10 -- # set +x 00:04:41.885 ************************************ 00:04:41.885 START TEST env_memory 00:04:41.885 ************************************ 00:04:41.885 02:01:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.886 00:04:41.886 00:04:41.886 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.886 http://cunit.sourceforge.net/ 00:04:41.886 00:04:41.886 00:04:41.886 Suite: memory 00:04:41.886 Test: alloc and free memory map ...[2024-05-14 02:01:56.336253] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:41.886 passed 00:04:41.886 Test: mem map translation ...[2024-05-14 02:01:56.368675] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:41.886 [2024-05-14 02:01:56.368732] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:41.886 [2024-05-14 02:01:56.368813] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:41.886 [2024-05-14 02:01:56.368830] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:41.886 passed 00:04:41.886 Test: mem map registration ...[2024-05-14 02:01:56.435612] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:41.886 [2024-05-14 02:01:56.435674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:41.886 passed 00:04:42.143 Test: mem map adjacent registrations ...passed 00:04:42.143 00:04:42.143 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.143 suites 1 1 n/a 0 0 00:04:42.143 tests 4 4 4 0 0 00:04:42.143 asserts 152 152 152 0 n/a 00:04:42.143 00:04:42.143 Elapsed time = 0.221 seconds 00:04:42.143 00:04:42.143 real 0m0.237s 00:04:42.143 user 0m0.216s 00:04:42.143 sys 0m0.019s 00:04:42.143 02:01:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.143 02:01:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.143 ************************************ 00:04:42.143 END TEST env_memory 00:04:42.143 ************************************ 00:04:42.143 02:01:56 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.143 02:01:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.143 02:01:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.143 02:01:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.143 ************************************ 00:04:42.143 START TEST env_vtophys 00:04:42.143 ************************************ 00:04:42.143 02:01:56 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.143 EAL: lib.eal log level changed from notice to debug 00:04:42.143 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.143 EAL: Detected lcore 1 as core 0 on socket 0 00:04:42.143 EAL: Detected lcore 2 as core 0 on socket 0 00:04:42.143 EAL: Detected lcore 3 as core 0 on socket 0 00:04:42.143 EAL: Detected lcore 4 as core 0 on socket 0 00:04:42.143 EAL: Detected lcore 5 as core 0 on socket 0 00:04:42.143 EAL: Detected lcore 6 as core 0 on socket 0 00:04:42.143 EAL: Detected lcore 7 as core 0 on socket 0 00:04:42.143 EAL: Detected lcore 8 as core 0 on socket 0 00:04:42.143 EAL: Detected lcore 9 as core 0 on socket 0 00:04:42.143 EAL: Maximum logical cores by configuration: 128 00:04:42.143 EAL: Detected CPU lcores: 10 00:04:42.143 EAL: Detected NUMA nodes: 1 00:04:42.143 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:42.143 EAL: Detected shared linkage of DPDK 00:04:42.143 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.143 EAL: Selected IOVA mode 'PA' 00:04:42.143 EAL: Probing VFIO support... 00:04:42.143 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.143 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:42.143 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.143 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.143 EAL: Setting up physically contiguous memory... 00:04:42.143 EAL: Setting maximum number of open files to 524288 00:04:42.143 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.143 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.143 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.143 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.143 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.144 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.144 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.144 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.144 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.144 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.144 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.144 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.144 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.144 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.144 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.144 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.144 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.144 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.144 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.144 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.144 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.144 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.144 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.144 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.144 EAL: Hugepages will be freed exactly as allocated. 
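The EAL output above shows the memory layout the vtophys unit test runs against: four memseg lists are created on socket 0, each consisting of a small 0x61000-byte header region plus a 0x400000000-byte (16 GiB) virtual-address reservation for 2 MB hugepages. A minimal sketch of re-running this unit test by hand, assuming the repository checkout and hugepage sizing this log was produced with:

  # re-run the vtophys CUnit suite standalone (paths taken from the run_test invocation above)
  cd /home/vagrant/spdk_repo/spdk
  sudo HUGEMEM=4096 ./scripts/setup.sh      # roughly 2048 x 2 MB hugepages, matching the Hugepages table earlier in this log
  sudo ./test/env/vtophys/vtophys           # runs the same vtophys_malloc_test / vtophys_spdk_malloc_test suite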
00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: TSC frequency is ~2200000 KHz 00:04:42.144 EAL: Main lcore 0 is ready (tid=7f11d2e80a00;cpuset=[0]) 00:04:42.144 EAL: Trying to obtain current memory policy. 00:04:42.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.144 EAL: Restoring previous memory policy: 0 00:04:42.144 EAL: request: mp_malloc_sync 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.144 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.144 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.144 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.144 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:42.144 00:04:42.144 00:04:42.144 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.144 http://cunit.sourceforge.net/ 00:04:42.144 00:04:42.144 00:04:42.144 Suite: components_suite 00:04:42.144 Test: vtophys_malloc_test ...passed 00:04:42.144 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.144 EAL: Restoring previous memory policy: 4 00:04:42.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.144 EAL: request: mp_malloc_sync 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.144 EAL: request: mp_malloc_sync 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.144 EAL: Trying to obtain current memory policy. 00:04:42.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.144 EAL: Restoring previous memory policy: 4 00:04:42.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.144 EAL: request: mp_malloc_sync 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.144 EAL: request: mp_malloc_sync 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.144 EAL: Trying to obtain current memory policy. 00:04:42.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.144 EAL: Restoring previous memory policy: 4 00:04:42.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.144 EAL: request: mp_malloc_sync 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.144 EAL: request: mp_malloc_sync 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.144 EAL: Trying to obtain current memory policy. 
00:04:42.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.144 EAL: Restoring previous memory policy: 4 00:04:42.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.144 EAL: request: mp_malloc_sync 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.144 EAL: request: mp_malloc_sync 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.144 EAL: Trying to obtain current memory policy. 00:04:42.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.144 EAL: Restoring previous memory policy: 4 00:04:42.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.144 EAL: request: mp_malloc_sync 00:04:42.144 EAL: No shared files mode enabled, IPC is disabled 00:04:42.144 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.403 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.403 EAL: request: mp_malloc_sync 00:04:42.403 EAL: No shared files mode enabled, IPC is disabled 00:04:42.403 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.403 EAL: Trying to obtain current memory policy. 00:04:42.403 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.403 EAL: Restoring previous memory policy: 4 00:04:42.403 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.403 EAL: request: mp_malloc_sync 00:04:42.403 EAL: No shared files mode enabled, IPC is disabled 00:04:42.403 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.403 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.403 EAL: request: mp_malloc_sync 00:04:42.403 EAL: No shared files mode enabled, IPC is disabled 00:04:42.403 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.403 EAL: Trying to obtain current memory policy. 00:04:42.403 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.403 EAL: Restoring previous memory policy: 4 00:04:42.403 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.403 EAL: request: mp_malloc_sync 00:04:42.403 EAL: No shared files mode enabled, IPC is disabled 00:04:42.403 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.403 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.403 EAL: request: mp_malloc_sync 00:04:42.403 EAL: No shared files mode enabled, IPC is disabled 00:04:42.403 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.403 EAL: Trying to obtain current memory policy. 00:04:42.403 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.403 EAL: Restoring previous memory policy: 4 00:04:42.403 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.403 EAL: request: mp_malloc_sync 00:04:42.403 EAL: No shared files mode enabled, IPC is disabled 00:04:42.403 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.403 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.403 EAL: request: mp_malloc_sync 00:04:42.403 EAL: No shared files mode enabled, IPC is disabled 00:04:42.403 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.403 EAL: Trying to obtain current memory policy. 
00:04:42.403 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.403 EAL: Restoring previous memory policy: 4 00:04:42.403 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.403 EAL: request: mp_malloc_sync 00:04:42.403 EAL: No shared files mode enabled, IPC is disabled 00:04:42.403 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.662 EAL: request: mp_malloc_sync 00:04:42.662 EAL: No shared files mode enabled, IPC is disabled 00:04:42.662 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.662 EAL: Trying to obtain current memory policy. 00:04:42.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.662 EAL: Restoring previous memory policy: 4 00:04:42.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.662 EAL: request: mp_malloc_sync 00:04:42.662 EAL: No shared files mode enabled, IPC is disabled 00:04:42.662 EAL: Heap on socket 0 was expanded by 1026MB 00:04:42.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.921 passed 00:04:42.921 00:04:42.921 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.921 suites 1 1 n/a 0 0 00:04:42.921 tests 2 2 2 0 0 00:04:42.921 asserts 5218 5218 5218 0 n/a 00:04:42.921 00:04:42.921 Elapsed time = 0.653 seconds 00:04:42.921 EAL: request: mp_malloc_sync 00:04:42.921 EAL: No shared files mode enabled, IPC is disabled 00:04:42.921 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:42.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.921 EAL: request: mp_malloc_sync 00:04:42.921 EAL: No shared files mode enabled, IPC is disabled 00:04:42.921 EAL: Heap on socket 0 was shrunk by 2MB 00:04:42.921 EAL: No shared files mode enabled, IPC is disabled 00:04:42.921 EAL: No shared files mode enabled, IPC is disabled 00:04:42.921 EAL: No shared files mode enabled, IPC is disabled 00:04:42.921 00:04:42.921 real 0m0.840s 00:04:42.921 user 0m0.428s 00:04:42.921 sys 0m0.280s 00:04:42.921 02:01:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.921 02:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:42.921 ************************************ 00:04:42.921 END TEST env_vtophys 00:04:42.921 ************************************ 00:04:42.921 02:01:57 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:42.921 02:01:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.921 02:01:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.921 02:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:42.921 ************************************ 00:04:42.921 START TEST env_pci 00:04:42.921 ************************************ 00:04:42.921 02:01:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:42.921 00:04:42.921 00:04:42.921 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.921 http://cunit.sourceforge.net/ 00:04:42.921 00:04:42.921 00:04:42.921 Suite: pci 00:04:42.921 Test: pci_hook ...[2024-05-14 02:01:57.464105] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55516 has claimed it 00:04:42.921 passed 00:04:42.921 00:04:42.921 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.921 suites 1 1 n/a 0 0 00:04:42.921 tests 1 1 1 0 0 00:04:42.921 asserts 25 25 25 0 n/a 00:04:42.921 00:04:42.921 Elapsed time = 0.002 seconds 00:04:42.921 EAL: Cannot find device (10000:00:01.0) 00:04:42.921 EAL: Failed to attach device 
on primary process 00:04:42.921 00:04:42.921 real 0m0.018s 00:04:42.921 user 0m0.008s 00:04:42.921 sys 0m0.010s 00:04:42.921 02:01:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.921 02:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:42.921 ************************************ 00:04:42.921 END TEST env_pci 00:04:42.921 ************************************ 00:04:42.921 02:01:57 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:42.921 02:01:57 -- env/env.sh@15 -- # uname 00:04:42.921 02:01:57 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:42.921 02:01:57 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:42.921 02:01:57 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.921 02:01:57 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:42.921 02:01:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.921 02:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.180 ************************************ 00:04:43.180 START TEST env_dpdk_post_init 00:04:43.180 ************************************ 00:04:43.180 02:01:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.180 EAL: Detected CPU lcores: 10 00:04:43.180 EAL: Detected NUMA nodes: 1 00:04:43.180 EAL: Detected shared linkage of DPDK 00:04:43.180 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.180 EAL: Selected IOVA mode 'PA' 00:04:43.180 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.180 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:43.180 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:43.180 Starting DPDK initialization... 00:04:43.180 Starting SPDK post initialization... 00:04:43.180 SPDK NVMe probe 00:04:43.180 Attaching to 0000:00:06.0 00:04:43.180 Attaching to 0000:00:07.0 00:04:43.180 Attached to 0000:00:06.0 00:04:43.180 Attached to 0000:00:07.0 00:04:43.180 Cleaning up... 
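Attaching to both emulated controllers succeeded here because setup.sh had already bound them to uio_pci_generic (the "nvme -> uio_pci_generic" lines above). A sketch of reproducing this step outside the harness, reusing the exact arguments the log shows for env_dpdk_post_init:

  sudo ./scripts/setup.sh                                   # bind 0000:00:06.0 / 0000:00:07.0 to a userspace driver
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init \
       -c 0x1 --base-virtaddr=0x200000000000                # same core mask and base virtaddr as the run above
  sudo ./scripts/setup.sh reset                             # hand the devices back to the kernel nvme driver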
00:04:43.180 00:04:43.180 real 0m0.176s 00:04:43.180 user 0m0.049s 00:04:43.180 sys 0m0.027s 00:04:43.180 02:01:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.180 02:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.180 ************************************ 00:04:43.180 END TEST env_dpdk_post_init 00:04:43.180 ************************************ 00:04:43.180 02:01:57 -- env/env.sh@26 -- # uname 00:04:43.180 02:01:57 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:43.180 02:01:57 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.180 02:01:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.180 02:01:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.180 02:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.180 ************************************ 00:04:43.180 START TEST env_mem_callbacks 00:04:43.180 ************************************ 00:04:43.180 02:01:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.439 EAL: Detected CPU lcores: 10 00:04:43.439 EAL: Detected NUMA nodes: 1 00:04:43.439 EAL: Detected shared linkage of DPDK 00:04:43.439 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.439 EAL: Selected IOVA mode 'PA' 00:04:43.439 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.439 00:04:43.439 00:04:43.439 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.439 http://cunit.sourceforge.net/ 00:04:43.439 00:04:43.439 00:04:43.439 Suite: memory 00:04:43.439 Test: test ... 00:04:43.439 register 0x200000200000 2097152 00:04:43.439 malloc 3145728 00:04:43.439 register 0x200000400000 4194304 00:04:43.439 buf 0x200000500000 len 3145728 PASSED 00:04:43.439 malloc 64 00:04:43.439 buf 0x2000004fff40 len 64 PASSED 00:04:43.439 malloc 4194304 00:04:43.439 register 0x200000800000 6291456 00:04:43.439 buf 0x200000a00000 len 4194304 PASSED 00:04:43.439 free 0x200000500000 3145728 00:04:43.439 free 0x2000004fff40 64 00:04:43.439 unregister 0x200000400000 4194304 PASSED 00:04:43.439 free 0x200000a00000 4194304 00:04:43.439 unregister 0x200000800000 6291456 PASSED 00:04:43.439 malloc 8388608 00:04:43.439 register 0x200000400000 10485760 00:04:43.439 buf 0x200000600000 len 8388608 PASSED 00:04:43.439 free 0x200000600000 8388608 00:04:43.439 unregister 0x200000400000 10485760 PASSED 00:04:43.439 passed 00:04:43.439 00:04:43.439 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.439 suites 1 1 n/a 0 0 00:04:43.439 tests 1 1 1 0 0 00:04:43.439 asserts 15 15 15 0 n/a 00:04:43.439 00:04:43.439 Elapsed time = 0.007 seconds 00:04:43.439 00:04:43.439 real 0m0.152s 00:04:43.439 user 0m0.023s 00:04:43.439 sys 0m0.028s 00:04:43.439 02:01:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.439 02:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.439 ************************************ 00:04:43.439 END TEST env_mem_callbacks 00:04:43.439 ************************************ 00:04:43.439 ************************************ 00:04:43.439 END TEST env 00:04:43.439 ************************************ 00:04:43.439 00:04:43.439 real 0m1.727s 00:04:43.439 user 0m0.846s 00:04:43.439 sys 0m0.531s 00:04:43.439 02:01:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.439 02:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.439 02:01:57 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
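run_test is the wrapper that produces the START TEST / END TEST banners and the real/user/sys timings seen throughout this log; the actual helper lives in common/autotest_common.sh (visible in the xtrace lines). A rough, simplified approximation of the pattern, for orientation only:

  run_test() {    # illustrative sketch; the SPDK helper also manages xtrace and argument checks
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"
      echo "************ END TEST $name ************"
  }
  run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh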
00:04:43.439 02:01:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.439 02:01:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.439 02:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.439 ************************************ 00:04:43.439 START TEST rpc 00:04:43.439 ************************************ 00:04:43.439 02:01:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:43.697 * Looking for test storage... 00:04:43.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.697 02:01:58 -- rpc/rpc.sh@65 -- # spdk_pid=55625 00:04:43.697 02:01:58 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.697 02:01:58 -- rpc/rpc.sh@67 -- # waitforlisten 55625 00:04:43.697 02:01:58 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:43.697 02:01:58 -- common/autotest_common.sh@819 -- # '[' -z 55625 ']' 00:04:43.697 02:01:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.697 02:01:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:43.697 02:01:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.697 02:01:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:43.697 02:01:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.697 [2024-05-14 02:01:58.123000] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:43.697 [2024-05-14 02:01:58.123110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55625 ] 00:04:43.697 [2024-05-14 02:01:58.261631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.955 [2024-05-14 02:01:58.328709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:43.955 [2024-05-14 02:01:58.328885] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:43.955 [2024-05-14 02:01:58.328904] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55625' to capture a snapshot of events at runtime. 00:04:43.955 [2024-05-14 02:01:58.328915] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55625 for offline analysis/debug. 
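Both hints in the notice above can be followed directly, assuming spdk_trace was built alongside spdk_tgt under build/bin; the pid (55625) is specific to this run:

  sudo ./build/bin/spdk_trace -s spdk_tgt -p 55625             # live snapshot of the bdev tracepoints, as the notice suggests
  cp /dev/shm/spdk_tgt_trace.pid55625 /tmp/                    # or keep the shm file for offline analysis
  sudo ./build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid55625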
00:04:43.955 [2024-05-14 02:01:58.328943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.523 02:01:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:44.523 02:01:59 -- common/autotest_common.sh@852 -- # return 0 00:04:44.523 02:01:59 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.523 02:01:59 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.523 02:01:59 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:44.523 02:01:59 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:44.523 02:01:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.523 02:01:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.523 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.523 ************************************ 00:04:44.523 START TEST rpc_integrity 00:04:44.523 ************************************ 00:04:44.523 02:01:59 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:44.523 02:01:59 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.523 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.523 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.523 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.523 02:01:59 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.523 02:01:59 -- rpc/rpc.sh@13 -- # jq length 00:04:44.782 02:01:59 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.782 02:01:59 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.782 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.782 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.782 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.782 02:01:59 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:44.782 02:01:59 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.782 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.782 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.782 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.782 02:01:59 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.782 { 00:04:44.782 "aliases": [ 00:04:44.782 "50225c8a-1f33-45df-b661-8522559a6252" 00:04:44.782 ], 00:04:44.782 "assigned_rate_limits": { 00:04:44.782 "r_mbytes_per_sec": 0, 00:04:44.782 "rw_ios_per_sec": 0, 00:04:44.782 "rw_mbytes_per_sec": 0, 00:04:44.782 "w_mbytes_per_sec": 0 00:04:44.782 }, 00:04:44.782 "block_size": 512, 00:04:44.782 "claimed": false, 00:04:44.782 "driver_specific": {}, 00:04:44.782 "memory_domains": [ 00:04:44.782 { 00:04:44.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.782 "dma_device_type": 2 00:04:44.782 } 00:04:44.782 ], 00:04:44.782 "name": "Malloc0", 00:04:44.782 "num_blocks": 16384, 00:04:44.782 "product_name": "Malloc disk", 00:04:44.782 "supported_io_types": { 00:04:44.782 "abort": true, 00:04:44.782 "compare": false, 00:04:44.782 "compare_and_write": false, 00:04:44.782 "flush": true, 00:04:44.782 "nvme_admin": false, 00:04:44.782 "nvme_io": false, 00:04:44.782 "read": true, 00:04:44.782 "reset": true, 00:04:44.782 "unmap": true, 00:04:44.782 "write": true, 00:04:44.782 "write_zeroes": true 00:04:44.782 }, 
00:04:44.782 "uuid": "50225c8a-1f33-45df-b661-8522559a6252", 00:04:44.782 "zoned": false 00:04:44.782 } 00:04:44.782 ]' 00:04:44.782 02:01:59 -- rpc/rpc.sh@17 -- # jq length 00:04:44.782 02:01:59 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.782 02:01:59 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:44.783 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.783 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.783 [2024-05-14 02:01:59.219122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:44.783 [2024-05-14 02:01:59.219180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.783 [2024-05-14 02:01:59.219216] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14233a0 00:04:44.783 [2024-05-14 02:01:59.219235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.783 [2024-05-14 02:01:59.220836] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.783 [2024-05-14 02:01:59.220872] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.783 Passthru0 00:04:44.783 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.783 02:01:59 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.783 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.783 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.783 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.783 02:01:59 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.783 { 00:04:44.783 "aliases": [ 00:04:44.783 "50225c8a-1f33-45df-b661-8522559a6252" 00:04:44.783 ], 00:04:44.783 "assigned_rate_limits": { 00:04:44.783 "r_mbytes_per_sec": 0, 00:04:44.783 "rw_ios_per_sec": 0, 00:04:44.783 "rw_mbytes_per_sec": 0, 00:04:44.783 "w_mbytes_per_sec": 0 00:04:44.783 }, 00:04:44.783 "block_size": 512, 00:04:44.783 "claim_type": "exclusive_write", 00:04:44.783 "claimed": true, 00:04:44.783 "driver_specific": {}, 00:04:44.783 "memory_domains": [ 00:04:44.783 { 00:04:44.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.783 "dma_device_type": 2 00:04:44.783 } 00:04:44.783 ], 00:04:44.783 "name": "Malloc0", 00:04:44.783 "num_blocks": 16384, 00:04:44.783 "product_name": "Malloc disk", 00:04:44.783 "supported_io_types": { 00:04:44.783 "abort": true, 00:04:44.783 "compare": false, 00:04:44.783 "compare_and_write": false, 00:04:44.783 "flush": true, 00:04:44.783 "nvme_admin": false, 00:04:44.783 "nvme_io": false, 00:04:44.783 "read": true, 00:04:44.783 "reset": true, 00:04:44.783 "unmap": true, 00:04:44.783 "write": true, 00:04:44.783 "write_zeroes": true 00:04:44.783 }, 00:04:44.783 "uuid": "50225c8a-1f33-45df-b661-8522559a6252", 00:04:44.783 "zoned": false 00:04:44.783 }, 00:04:44.783 { 00:04:44.783 "aliases": [ 00:04:44.783 "b6386d4e-3f90-584b-816c-acbac8760edd" 00:04:44.783 ], 00:04:44.783 "assigned_rate_limits": { 00:04:44.783 "r_mbytes_per_sec": 0, 00:04:44.783 "rw_ios_per_sec": 0, 00:04:44.783 "rw_mbytes_per_sec": 0, 00:04:44.783 "w_mbytes_per_sec": 0 00:04:44.783 }, 00:04:44.783 "block_size": 512, 00:04:44.783 "claimed": false, 00:04:44.783 "driver_specific": { 00:04:44.783 "passthru": { 00:04:44.783 "base_bdev_name": "Malloc0", 00:04:44.783 "name": "Passthru0" 00:04:44.783 } 00:04:44.783 }, 00:04:44.783 "memory_domains": [ 00:04:44.783 { 00:04:44.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.783 "dma_device_type": 2 00:04:44.783 } 00:04:44.783 ], 
00:04:44.783 "name": "Passthru0", 00:04:44.783 "num_blocks": 16384, 00:04:44.783 "product_name": "passthru", 00:04:44.783 "supported_io_types": { 00:04:44.783 "abort": true, 00:04:44.783 "compare": false, 00:04:44.783 "compare_and_write": false, 00:04:44.783 "flush": true, 00:04:44.783 "nvme_admin": false, 00:04:44.783 "nvme_io": false, 00:04:44.783 "read": true, 00:04:44.783 "reset": true, 00:04:44.783 "unmap": true, 00:04:44.783 "write": true, 00:04:44.783 "write_zeroes": true 00:04:44.783 }, 00:04:44.783 "uuid": "b6386d4e-3f90-584b-816c-acbac8760edd", 00:04:44.783 "zoned": false 00:04:44.783 } 00:04:44.783 ]' 00:04:44.783 02:01:59 -- rpc/rpc.sh@21 -- # jq length 00:04:44.783 02:01:59 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.783 02:01:59 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.783 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.783 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.783 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.783 02:01:59 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:44.783 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.783 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.783 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.783 02:01:59 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.783 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.783 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.783 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.783 02:01:59 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.783 02:01:59 -- rpc/rpc.sh@26 -- # jq length 00:04:45.042 02:01:59 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:45.042 00:04:45.042 real 0m0.302s 00:04:45.042 user 0m0.193s 00:04:45.042 sys 0m0.036s 00:04:45.042 02:01:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.042 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.042 ************************************ 00:04:45.042 END TEST rpc_integrity 00:04:45.042 ************************************ 00:04:45.042 02:01:59 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:45.042 02:01:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.042 02:01:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.042 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.042 ************************************ 00:04:45.042 START TEST rpc_plugins 00:04:45.042 ************************************ 00:04:45.042 02:01:59 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:45.042 02:01:59 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:45.043 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.043 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.043 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.043 02:01:59 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:45.043 02:01:59 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:45.043 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.043 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.043 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.043 02:01:59 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:45.043 { 00:04:45.043 "aliases": [ 00:04:45.043 "18103a36-b91d-4f0f-a95e-7fd3bb8cd008" 00:04:45.043 ], 00:04:45.043 "assigned_rate_limits": { 00:04:45.043 "r_mbytes_per_sec": 0, 00:04:45.043 
"rw_ios_per_sec": 0, 00:04:45.043 "rw_mbytes_per_sec": 0, 00:04:45.043 "w_mbytes_per_sec": 0 00:04:45.043 }, 00:04:45.043 "block_size": 4096, 00:04:45.043 "claimed": false, 00:04:45.043 "driver_specific": {}, 00:04:45.043 "memory_domains": [ 00:04:45.043 { 00:04:45.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.043 "dma_device_type": 2 00:04:45.043 } 00:04:45.043 ], 00:04:45.043 "name": "Malloc1", 00:04:45.043 "num_blocks": 256, 00:04:45.043 "product_name": "Malloc disk", 00:04:45.043 "supported_io_types": { 00:04:45.043 "abort": true, 00:04:45.043 "compare": false, 00:04:45.043 "compare_and_write": false, 00:04:45.043 "flush": true, 00:04:45.043 "nvme_admin": false, 00:04:45.043 "nvme_io": false, 00:04:45.043 "read": true, 00:04:45.043 "reset": true, 00:04:45.043 "unmap": true, 00:04:45.043 "write": true, 00:04:45.043 "write_zeroes": true 00:04:45.043 }, 00:04:45.043 "uuid": "18103a36-b91d-4f0f-a95e-7fd3bb8cd008", 00:04:45.043 "zoned": false 00:04:45.043 } 00:04:45.043 ]' 00:04:45.043 02:01:59 -- rpc/rpc.sh@32 -- # jq length 00:04:45.043 02:01:59 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:45.043 02:01:59 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:45.043 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.043 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.043 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.043 02:01:59 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:45.043 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.043 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.043 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.043 02:01:59 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:45.043 02:01:59 -- rpc/rpc.sh@36 -- # jq length 00:04:45.043 02:01:59 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:45.043 00:04:45.043 real 0m0.161s 00:04:45.043 user 0m0.106s 00:04:45.043 sys 0m0.016s 00:04:45.043 02:01:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.043 ************************************ 00:04:45.043 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.043 END TEST rpc_plugins 00:04:45.043 ************************************ 00:04:45.043 02:01:59 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:45.043 02:01:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.043 02:01:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.043 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.043 ************************************ 00:04:45.043 START TEST rpc_trace_cmd_test 00:04:45.043 ************************************ 00:04:45.043 02:01:59 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:45.043 02:01:59 -- rpc/rpc.sh@40 -- # local info 00:04:45.043 02:01:59 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:45.043 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.043 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.302 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.302 02:01:59 -- rpc/rpc.sh@42 -- # info='{ 00:04:45.302 "bdev": { 00:04:45.302 "mask": "0x8", 00:04:45.302 "tpoint_mask": "0xffffffffffffffff" 00:04:45.302 }, 00:04:45.302 "bdev_nvme": { 00:04:45.302 "mask": "0x4000", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "blobfs": { 00:04:45.302 "mask": "0x80", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "dsa": { 00:04:45.302 "mask": "0x200", 00:04:45.302 
"tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "ftl": { 00:04:45.302 "mask": "0x40", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "iaa": { 00:04:45.302 "mask": "0x1000", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "iscsi_conn": { 00:04:45.302 "mask": "0x2", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "nvme_pcie": { 00:04:45.302 "mask": "0x800", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "nvme_tcp": { 00:04:45.302 "mask": "0x2000", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "nvmf_rdma": { 00:04:45.302 "mask": "0x10", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "nvmf_tcp": { 00:04:45.302 "mask": "0x20", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "scsi": { 00:04:45.302 "mask": "0x4", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "thread": { 00:04:45.302 "mask": "0x400", 00:04:45.302 "tpoint_mask": "0x0" 00:04:45.302 }, 00:04:45.302 "tpoint_group_mask": "0x8", 00:04:45.302 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55625" 00:04:45.302 }' 00:04:45.302 02:01:59 -- rpc/rpc.sh@43 -- # jq length 00:04:45.302 02:01:59 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:45.302 02:01:59 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:45.302 02:01:59 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:45.302 02:01:59 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:45.302 02:01:59 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:45.302 02:01:59 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:45.302 02:01:59 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:45.302 02:01:59 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:45.302 02:01:59 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:45.302 00:04:45.302 real 0m0.248s 00:04:45.302 user 0m0.216s 00:04:45.302 sys 0m0.026s 00:04:45.302 02:01:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.302 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.302 ************************************ 00:04:45.302 END TEST rpc_trace_cmd_test 00:04:45.302 ************************************ 00:04:45.559 02:01:59 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:45.559 02:01:59 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:45.559 02:01:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.559 02:01:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.559 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.559 ************************************ 00:04:45.559 START TEST go_rpc 00:04:45.559 ************************************ 00:04:45.559 02:01:59 -- common/autotest_common.sh@1104 -- # go_rpc 00:04:45.559 02:01:59 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:45.559 02:01:59 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:45.559 02:01:59 -- rpc/rpc.sh@52 -- # jq length 00:04:45.559 02:01:59 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:45.559 02:01:59 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.559 02:01:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.559 02:01:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.559 02:01:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.559 02:02:00 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:45.559 02:02:00 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:45.559 02:02:00 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["945aa38c-3d03-4bb0-933c-a9cee9de1f95"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"945aa38c-3d03-4bb0-933c-a9cee9de1f95","zoned":false}]' 00:04:45.559 02:02:00 -- rpc/rpc.sh@57 -- # jq length 00:04:45.559 02:02:00 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:45.559 02:02:00 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:45.559 02:02:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.559 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.560 02:02:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.560 02:02:00 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:45.560 02:02:00 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:45.560 02:02:00 -- rpc/rpc.sh@61 -- # jq length 00:04:45.560 02:02:00 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:45.560 00:04:45.560 real 0m0.221s 00:04:45.560 user 0m0.150s 00:04:45.560 sys 0m0.040s 00:04:45.560 02:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.560 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.560 ************************************ 00:04:45.560 END TEST go_rpc 00:04:45.560 ************************************ 00:04:45.817 02:02:00 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:45.817 02:02:00 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:45.817 02:02:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.817 02:02:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.817 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.817 ************************************ 00:04:45.817 START TEST rpc_daemon_integrity 00:04:45.817 ************************************ 00:04:45.817 02:02:00 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:45.817 02:02:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.817 02:02:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.817 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.817 02:02:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.817 02:02:00 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.817 02:02:00 -- rpc/rpc.sh@13 -- # jq length 00:04:45.817 02:02:00 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.817 02:02:00 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.817 02:02:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.817 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.817 02:02:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.817 02:02:00 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:45.817 02:02:00 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.818 02:02:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.818 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.818 02:02:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.818 02:02:00 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.818 { 00:04:45.818 "aliases": [ 00:04:45.818 "b5ae6c21-9d31-4166-9fee-d822834691af" 00:04:45.818 ], 00:04:45.818 "assigned_rate_limits": { 00:04:45.818 
"r_mbytes_per_sec": 0, 00:04:45.818 "rw_ios_per_sec": 0, 00:04:45.818 "rw_mbytes_per_sec": 0, 00:04:45.818 "w_mbytes_per_sec": 0 00:04:45.818 }, 00:04:45.818 "block_size": 512, 00:04:45.818 "claimed": false, 00:04:45.818 "driver_specific": {}, 00:04:45.818 "memory_domains": [ 00:04:45.818 { 00:04:45.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.818 "dma_device_type": 2 00:04:45.818 } 00:04:45.818 ], 00:04:45.818 "name": "Malloc3", 00:04:45.818 "num_blocks": 16384, 00:04:45.818 "product_name": "Malloc disk", 00:04:45.818 "supported_io_types": { 00:04:45.818 "abort": true, 00:04:45.818 "compare": false, 00:04:45.818 "compare_and_write": false, 00:04:45.818 "flush": true, 00:04:45.818 "nvme_admin": false, 00:04:45.818 "nvme_io": false, 00:04:45.818 "read": true, 00:04:45.818 "reset": true, 00:04:45.818 "unmap": true, 00:04:45.818 "write": true, 00:04:45.818 "write_zeroes": true 00:04:45.818 }, 00:04:45.818 "uuid": "b5ae6c21-9d31-4166-9fee-d822834691af", 00:04:45.818 "zoned": false 00:04:45.818 } 00:04:45.818 ]' 00:04:45.818 02:02:00 -- rpc/rpc.sh@17 -- # jq length 00:04:45.818 02:02:00 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.818 02:02:00 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:45.818 02:02:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.818 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.818 [2024-05-14 02:02:00.331504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:45.818 [2024-05-14 02:02:00.331551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.818 [2024-05-14 02:02:00.331573] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14228c0 00:04:45.818 [2024-05-14 02:02:00.331582] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.818 [2024-05-14 02:02:00.332998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.818 [2024-05-14 02:02:00.333034] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.818 Passthru0 00:04:45.818 02:02:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.818 02:02:00 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.818 02:02:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.818 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.818 02:02:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.818 02:02:00 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.818 { 00:04:45.818 "aliases": [ 00:04:45.818 "b5ae6c21-9d31-4166-9fee-d822834691af" 00:04:45.818 ], 00:04:45.818 "assigned_rate_limits": { 00:04:45.818 "r_mbytes_per_sec": 0, 00:04:45.818 "rw_ios_per_sec": 0, 00:04:45.818 "rw_mbytes_per_sec": 0, 00:04:45.818 "w_mbytes_per_sec": 0 00:04:45.818 }, 00:04:45.818 "block_size": 512, 00:04:45.818 "claim_type": "exclusive_write", 00:04:45.818 "claimed": true, 00:04:45.818 "driver_specific": {}, 00:04:45.818 "memory_domains": [ 00:04:45.818 { 00:04:45.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.818 "dma_device_type": 2 00:04:45.818 } 00:04:45.818 ], 00:04:45.818 "name": "Malloc3", 00:04:45.818 "num_blocks": 16384, 00:04:45.818 "product_name": "Malloc disk", 00:04:45.818 "supported_io_types": { 00:04:45.818 "abort": true, 00:04:45.818 "compare": false, 00:04:45.818 "compare_and_write": false, 00:04:45.818 "flush": true, 00:04:45.818 "nvme_admin": false, 00:04:45.818 "nvme_io": false, 00:04:45.818 "read": true, 00:04:45.818 "reset": true, 
00:04:45.818 "unmap": true, 00:04:45.818 "write": true, 00:04:45.818 "write_zeroes": true 00:04:45.818 }, 00:04:45.818 "uuid": "b5ae6c21-9d31-4166-9fee-d822834691af", 00:04:45.818 "zoned": false 00:04:45.818 }, 00:04:45.818 { 00:04:45.818 "aliases": [ 00:04:45.818 "d69b5376-aae6-55ed-bbf3-61b1c0e130f2" 00:04:45.818 ], 00:04:45.818 "assigned_rate_limits": { 00:04:45.818 "r_mbytes_per_sec": 0, 00:04:45.818 "rw_ios_per_sec": 0, 00:04:45.818 "rw_mbytes_per_sec": 0, 00:04:45.818 "w_mbytes_per_sec": 0 00:04:45.818 }, 00:04:45.818 "block_size": 512, 00:04:45.818 "claimed": false, 00:04:45.818 "driver_specific": { 00:04:45.818 "passthru": { 00:04:45.818 "base_bdev_name": "Malloc3", 00:04:45.818 "name": "Passthru0" 00:04:45.818 } 00:04:45.818 }, 00:04:45.818 "memory_domains": [ 00:04:45.818 { 00:04:45.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.818 "dma_device_type": 2 00:04:45.818 } 00:04:45.818 ], 00:04:45.818 "name": "Passthru0", 00:04:45.818 "num_blocks": 16384, 00:04:45.818 "product_name": "passthru", 00:04:45.818 "supported_io_types": { 00:04:45.818 "abort": true, 00:04:45.818 "compare": false, 00:04:45.818 "compare_and_write": false, 00:04:45.818 "flush": true, 00:04:45.818 "nvme_admin": false, 00:04:45.818 "nvme_io": false, 00:04:45.818 "read": true, 00:04:45.818 "reset": true, 00:04:45.818 "unmap": true, 00:04:45.818 "write": true, 00:04:45.818 "write_zeroes": true 00:04:45.818 }, 00:04:45.818 "uuid": "d69b5376-aae6-55ed-bbf3-61b1c0e130f2", 00:04:45.818 "zoned": false 00:04:45.818 } 00:04:45.818 ]' 00:04:45.819 02:02:00 -- rpc/rpc.sh@21 -- # jq length 00:04:46.076 02:02:00 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.076 02:02:00 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.076 02:02:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.076 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.076 02:02:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:46.076 02:02:00 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:46.076 02:02:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.076 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.076 02:02:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:46.076 02:02:00 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.076 02:02:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.076 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.076 02:02:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:46.076 02:02:00 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.076 02:02:00 -- rpc/rpc.sh@26 -- # jq length 00:04:46.076 02:02:00 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.076 00:04:46.076 real 0m0.313s 00:04:46.076 user 0m0.204s 00:04:46.076 sys 0m0.040s 00:04:46.076 02:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.076 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.076 ************************************ 00:04:46.076 END TEST rpc_daemon_integrity 00:04:46.076 ************************************ 00:04:46.077 02:02:00 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.077 02:02:00 -- rpc/rpc.sh@84 -- # killprocess 55625 00:04:46.077 02:02:00 -- common/autotest_common.sh@926 -- # '[' -z 55625 ']' 00:04:46.077 02:02:00 -- common/autotest_common.sh@930 -- # kill -0 55625 00:04:46.077 02:02:00 -- common/autotest_common.sh@931 -- # uname 00:04:46.077 02:02:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:46.077 02:02:00 -- common/autotest_common.sh@932 -- 
# ps --no-headers -o comm= 55625 00:04:46.077 02:02:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:46.077 02:02:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:46.077 killing process with pid 55625 00:04:46.077 02:02:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55625' 00:04:46.077 02:02:00 -- common/autotest_common.sh@945 -- # kill 55625 00:04:46.077 02:02:00 -- common/autotest_common.sh@950 -- # wait 55625 00:04:46.335 00:04:46.335 real 0m2.856s 00:04:46.335 user 0m3.889s 00:04:46.335 sys 0m0.594s 00:04:46.335 02:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.335 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.335 ************************************ 00:04:46.335 END TEST rpc 00:04:46.335 ************************************ 00:04:46.335 02:02:00 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:46.335 02:02:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.335 02:02:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.335 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.335 ************************************ 00:04:46.335 START TEST rpc_client 00:04:46.335 ************************************ 00:04:46.335 02:02:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:46.594 * Looking for test storage... 00:04:46.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:46.594 02:02:00 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:46.594 OK 00:04:46.594 02:02:00 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:46.594 00:04:46.594 real 0m0.094s 00:04:46.594 user 0m0.039s 00:04:46.594 sys 0m0.062s 00:04:46.594 02:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.594 02:02:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.594 ************************************ 00:04:46.594 END TEST rpc_client 00:04:46.594 ************************************ 00:04:46.594 02:02:01 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:46.594 02:02:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.594 02:02:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.594 02:02:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.594 ************************************ 00:04:46.594 START TEST json_config 00:04:46.594 ************************************ 00:04:46.594 02:02:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:46.594 02:02:01 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.594 02:02:01 -- nvmf/common.sh@7 -- # uname -s 00:04:46.594 02:02:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.594 02:02:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.594 02:02:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.594 02:02:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.594 02:02:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.594 02:02:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.594 02:02:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.594 02:02:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.594 02:02:01 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.594 02:02:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.594 02:02:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:04:46.594 02:02:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:04:46.594 02:02:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.594 02:02:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.594 02:02:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.594 02:02:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.594 02:02:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.594 02:02:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.594 02:02:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.594 02:02:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.594 02:02:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.594 02:02:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.594 02:02:01 -- paths/export.sh@5 -- # export PATH 00:04:46.594 02:02:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.594 02:02:01 -- nvmf/common.sh@46 -- # : 0 00:04:46.594 02:02:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:46.594 02:02:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:46.594 02:02:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:46.594 02:02:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.594 02:02:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.594 02:02:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:46.594 02:02:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:46.594 02:02:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:46.594 02:02:01 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:46.594 02:02:01 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 
]] 00:04:46.595 02:02:01 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:46.595 02:02:01 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:46.595 02:02:01 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:46.595 02:02:01 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:46.595 02:02:01 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:46.595 02:02:01 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:46.595 02:02:01 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:46.595 02:02:01 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:46.595 02:02:01 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:46.595 02:02:01 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:46.595 02:02:01 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:46.595 02:02:01 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:46.595 INFO: JSON configuration test init 00:04:46.595 02:02:01 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:46.595 02:02:01 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:46.595 02:02:01 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:46.595 02:02:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:46.595 02:02:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.595 02:02:01 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:46.595 02:02:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:46.595 02:02:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.595 02:02:01 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:46.595 02:02:01 -- json_config/json_config.sh@98 -- # local app=target 00:04:46.595 02:02:01 -- json_config/json_config.sh@99 -- # shift 00:04:46.595 02:02:01 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:46.595 02:02:01 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:46.595 02:02:01 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:46.595 02:02:01 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:46.595 02:02:01 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:46.595 02:02:01 -- json_config/json_config.sh@111 -- # app_pid[$app]=55925 00:04:46.595 Waiting for target to run... 00:04:46.595 02:02:01 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 
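The json_config run above drives the target over a private RPC socket (/var/tmp/spdk_tgt.sock) rather than the default one, starting it paused with --wait-for-rpc and only configuring it once the socket answers. A minimal sketch of that start-and-configure pattern, using the binary and script paths from this run; the readiness poll with rpc_get_methods is a hypothetical stand-in for the suite's waitforlisten helper, and the gen_nvme.sh pipe mirrors the load_config call that follows in the trace:
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  # Start the target on its own RPC socket, paused until RPC-driven init (--wait-for-rpc).
  "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
  tgt_pid=$!
  # Hypothetical readiness poll (stand-in for waitforlisten): wait until the socket serves RPCs.
  until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # Feed the generated NVMe/bdev configuration to the paused target.
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | "$RPC" -s "$SOCK" load_config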
00:04:46.595 02:02:01 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:46.595 02:02:01 -- json_config/json_config.sh@114 -- # waitforlisten 55925 /var/tmp/spdk_tgt.sock 00:04:46.595 02:02:01 -- common/autotest_common.sh@819 -- # '[' -z 55925 ']' 00:04:46.595 02:02:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.595 02:02:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:46.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.595 02:02:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.595 02:02:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:46.595 02:02:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.595 [2024-05-14 02:02:01.162897] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:46.595 [2024-05-14 02:02:01.163000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55925 ] 00:04:47.160 [2024-05-14 02:02:01.461508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.160 [2024-05-14 02:02:01.514607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.160 [2024-05-14 02:02:01.514801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.724 02:02:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:47.724 00:04:47.724 02:02:02 -- common/autotest_common.sh@852 -- # return 0 00:04:47.724 02:02:02 -- json_config/json_config.sh@115 -- # echo '' 00:04:47.724 02:02:02 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:47.724 02:02:02 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:47.724 02:02:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:47.724 02:02:02 -- common/autotest_common.sh@10 -- # set +x 00:04:47.724 02:02:02 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:47.724 02:02:02 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:47.724 02:02:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:47.725 02:02:02 -- common/autotest_common.sh@10 -- # set +x 00:04:47.725 02:02:02 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:47.725 02:02:02 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:47.725 02:02:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:48.291 02:02:02 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:48.291 02:02:02 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:48.291 02:02:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:48.291 02:02:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.291 02:02:02 -- json_config/json_config.sh@48 -- # local ret=0 00:04:48.291 02:02:02 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:48.291 02:02:02 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:48.291 02:02:02 -- json_config/json_config.sh@51 -- 
# tgt_rpc notify_get_types 00:04:48.291 02:02:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:48.291 02:02:02 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:48.549 02:02:02 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:48.549 02:02:02 -- json_config/json_config.sh@51 -- # local get_types 00:04:48.549 02:02:02 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:48.549 02:02:02 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:48.549 02:02:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:48.549 02:02:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.549 02:02:02 -- json_config/json_config.sh@58 -- # return 0 00:04:48.549 02:02:02 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:48.549 02:02:02 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:48.550 02:02:02 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:48.550 02:02:02 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:48.550 02:02:02 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:48.550 02:02:02 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:48.550 02:02:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:48.550 02:02:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.550 02:02:02 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:48.550 02:02:02 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:48.550 02:02:02 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:48.550 02:02:02 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:48.550 02:02:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:48.807 MallocForNvmf0 00:04:48.807 02:02:03 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:48.807 02:02:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:49.066 MallocForNvmf1 00:04:49.066 02:02:03 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:49.066 02:02:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:49.323 [2024-05-14 02:02:03.835440] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.323 02:02:03 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:49.323 02:02:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:49.581 02:02:04 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:49.581 02:02:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:49.838 02:02:04 -- json_config/json_config.sh@301 -- # 
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:49.838 02:02:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:50.404 02:02:04 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:50.404 02:02:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:50.404 [2024-05-14 02:02:04.924061] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:50.404 02:02:04 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:50.404 02:02:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:50.404 02:02:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.404 02:02:04 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:50.404 02:02:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:50.404 02:02:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.662 02:02:05 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:50.662 02:02:05 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:50.662 02:02:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:50.939 MallocBdevForConfigChangeCheck 00:04:50.939 02:02:05 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:50.939 02:02:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:50.939 02:02:05 -- common/autotest_common.sh@10 -- # set +x 00:04:50.939 02:02:05 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:50.939 02:02:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.513 INFO: shutting down applications... 00:04:51.513 02:02:05 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
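The create_nvmf_subsystem_config step traced above is just a short chain of target RPCs: two malloc bdevs, a TCP transport, one subsystem with two namespaces, and a listener on 127.0.0.1:4420. A hedged replay of that chain with rpc.py against the same socket, names and flags taken verbatim from the trace:
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB malloc bdev, 512-byte blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB malloc bdev, 1024-byte blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport, options as in this run
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420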
00:04:51.513 02:02:05 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:51.513 02:02:05 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:51.513 02:02:05 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:51.513 02:02:05 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:51.771 Calling clear_iscsi_subsystem 00:04:51.771 Calling clear_nvmf_subsystem 00:04:51.771 Calling clear_nbd_subsystem 00:04:51.771 Calling clear_ublk_subsystem 00:04:51.771 Calling clear_vhost_blk_subsystem 00:04:51.771 Calling clear_vhost_scsi_subsystem 00:04:51.771 Calling clear_scheduler_subsystem 00:04:51.771 Calling clear_bdev_subsystem 00:04:51.771 Calling clear_accel_subsystem 00:04:51.771 Calling clear_vmd_subsystem 00:04:51.771 Calling clear_sock_subsystem 00:04:51.771 Calling clear_iobuf_subsystem 00:04:51.771 02:02:06 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:51.771 02:02:06 -- json_config/json_config.sh@396 -- # count=100 00:04:51.771 02:02:06 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:51.771 02:02:06 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.771 02:02:06 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:51.771 02:02:06 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:52.030 02:02:06 -- json_config/json_config.sh@398 -- # break 00:04:52.030 02:02:06 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:52.030 02:02:06 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:52.030 02:02:06 -- json_config/json_config.sh@120 -- # local app=target 00:04:52.030 02:02:06 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:52.030 02:02:06 -- json_config/json_config.sh@124 -- # [[ -n 55925 ]] 00:04:52.030 02:02:06 -- json_config/json_config.sh@127 -- # kill -SIGINT 55925 00:04:52.030 02:02:06 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:52.030 02:02:06 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:52.030 02:02:06 -- json_config/json_config.sh@130 -- # kill -0 55925 00:04:52.030 02:02:06 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:52.596 02:02:07 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:52.596 02:02:07 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:52.596 02:02:07 -- json_config/json_config.sh@130 -- # kill -0 55925 00:04:52.596 02:02:07 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:52.596 02:02:07 -- json_config/json_config.sh@132 -- # break 00:04:52.596 02:02:07 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:52.596 SPDK target shutdown done 00:04:52.596 02:02:07 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:52.596 INFO: relaunching applications... 00:04:52.596 02:02:07 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
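Shutdown in json_config_test_shutdown_app, as traced above, is cooperative: the target receives SIGINT and the script polls the pid for up to 30 half-second intervals before declaring it gone. A small sketch of that wait loop, assuming tgt_pid holds the pid the test recorded (55925 in this run):
  kill -SIGINT "$tgt_pid"
  for ((i = 0; i < 30; i++)); do
      # kill -0 only checks whether the process still exists.
      if ! kill -0 "$tgt_pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done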
00:04:52.596 02:02:07 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:52.596 02:02:07 -- json_config/json_config.sh@98 -- # local app=target 00:04:52.596 02:02:07 -- json_config/json_config.sh@99 -- # shift 00:04:52.596 02:02:07 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:52.596 02:02:07 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:52.596 02:02:07 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:52.596 02:02:07 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:52.596 02:02:07 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:52.596 02:02:07 -- json_config/json_config.sh@111 -- # app_pid[$app]=56205 00:04:52.596 Waiting for target to run... 00:04:52.596 02:02:07 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:52.596 02:02:07 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:52.596 02:02:07 -- json_config/json_config.sh@114 -- # waitforlisten 56205 /var/tmp/spdk_tgt.sock 00:04:52.596 02:02:07 -- common/autotest_common.sh@819 -- # '[' -z 56205 ']' 00:04:52.596 02:02:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.596 02:02:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:52.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:52.596 02:02:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.596 02:02:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:52.596 02:02:07 -- common/autotest_common.sh@10 -- # set +x 00:04:52.596 [2024-05-14 02:02:07.092423] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:52.596 [2024-05-14 02:02:07.092521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56205 ] 00:04:52.855 [2024-05-14 02:02:07.398945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.113 [2024-05-14 02:02:07.444669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:53.113 [2024-05-14 02:02:07.444859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.371 [2024-05-14 02:02:07.733696] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.371 [2024-05-14 02:02:07.765795] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:53.627 02:02:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:53.627 02:02:08 -- common/autotest_common.sh@852 -- # return 0 00:04:53.627 00:04:53.627 02:02:08 -- json_config/json_config.sh@115 -- # echo '' 00:04:53.627 02:02:08 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:53.627 INFO: Checking if target configuration is the same... 00:04:53.627 02:02:08 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
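The relaunch above restarts spdk_tgt from the JSON the previous instance wrote with save_config, instead of going through --wait-for-rpc again. A hedged sketch of that round trip, reusing the paths from this run:
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  CFG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  # Persist the live configuration of the running target...
  $RPC save_config > "$CFG"
  # ...and, once the old target has exited, boot a fresh one directly from it.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CFG" &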
00:04:53.627 02:02:08 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:53.627 02:02:08 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:53.627 02:02:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:53.627 + '[' 2 -ne 2 ']' 00:04:53.627 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:53.627 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:53.627 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:53.627 +++ basename /dev/fd/62 00:04:53.627 ++ mktemp /tmp/62.XXX 00:04:53.627 + tmp_file_1=/tmp/62.25y 00:04:53.627 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:53.627 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:53.627 + tmp_file_2=/tmp/spdk_tgt_config.json.5ez 00:04:53.627 + ret=0 00:04:53.627 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:54.195 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:54.195 + diff -u /tmp/62.25y /tmp/spdk_tgt_config.json.5ez 00:04:54.195 INFO: JSON config files are the same 00:04:54.195 + echo 'INFO: JSON config files are the same' 00:04:54.195 + rm /tmp/62.25y /tmp/spdk_tgt_config.json.5ez 00:04:54.195 + exit 0 00:04:54.195 02:02:08 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:54.195 INFO: changing configuration and checking if this can be detected... 00:04:54.195 02:02:08 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:54.195 02:02:08 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:54.195 02:02:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:54.452 02:02:08 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:54.452 02:02:08 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:54.452 02:02:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.452 + '[' 2 -ne 2 ']' 00:04:54.452 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:54.452 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:54.452 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:54.452 +++ basename /dev/fd/62 00:04:54.452 ++ mktemp /tmp/62.XXX 00:04:54.452 + tmp_file_1=/tmp/62.9nT 00:04:54.452 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:54.452 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:54.452 + tmp_file_2=/tmp/spdk_tgt_config.json.m6B 00:04:54.452 + ret=0 00:04:54.452 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:55.019 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:55.019 + diff -u /tmp/62.9nT /tmp/spdk_tgt_config.json.m6B 00:04:55.019 + ret=1 00:04:55.019 + echo '=== Start of file: /tmp/62.9nT ===' 00:04:55.019 + cat /tmp/62.9nT 00:04:55.019 + echo '=== End of file: /tmp/62.9nT ===' 00:04:55.019 + echo '' 00:04:55.019 + echo '=== Start of file: /tmp/spdk_tgt_config.json.m6B ===' 00:04:55.019 + cat /tmp/spdk_tgt_config.json.m6B 00:04:55.019 + echo '=== End of file: /tmp/spdk_tgt_config.json.m6B ===' 00:04:55.019 + echo '' 00:04:55.019 + rm /tmp/62.9nT /tmp/spdk_tgt_config.json.m6B 00:04:55.019 + exit 1 00:04:55.019 INFO: configuration change detected. 00:04:55.019 02:02:09 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:55.019 02:02:09 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:55.019 02:02:09 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:55.019 02:02:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:55.019 02:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:55.019 02:02:09 -- json_config/json_config.sh@360 -- # local ret=0 00:04:55.019 02:02:09 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:55.019 02:02:09 -- json_config/json_config.sh@370 -- # [[ -n 56205 ]] 00:04:55.019 02:02:09 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:55.019 02:02:09 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:55.019 02:02:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:55.019 02:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:55.019 02:02:09 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:55.019 02:02:09 -- json_config/json_config.sh@246 -- # uname -s 00:04:55.019 02:02:09 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:55.019 02:02:09 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:55.019 02:02:09 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:55.019 02:02:09 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:55.019 02:02:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:55.019 02:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:55.019 02:02:09 -- json_config/json_config.sh@376 -- # killprocess 56205 00:04:55.019 02:02:09 -- common/autotest_common.sh@926 -- # '[' -z 56205 ']' 00:04:55.019 02:02:09 -- common/autotest_common.sh@930 -- # kill -0 56205 00:04:55.019 02:02:09 -- common/autotest_common.sh@931 -- # uname 00:04:55.019 02:02:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:55.019 02:02:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56205 00:04:55.019 02:02:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:55.019 02:02:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:55.019 killing process with pid 56205 00:04:55.019 02:02:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56205' 00:04:55.019 
02:02:09 -- common/autotest_common.sh@945 -- # kill 56205 00:04:55.019 02:02:09 -- common/autotest_common.sh@950 -- # wait 56205 00:04:55.278 02:02:09 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:55.278 02:02:09 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:55.278 02:02:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:55.278 02:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:55.278 02:02:09 -- json_config/json_config.sh@381 -- # return 0 00:04:55.278 INFO: Success 00:04:55.278 02:02:09 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:55.278 00:04:55.278 real 0m8.693s 00:04:55.278 user 0m12.980s 00:04:55.278 sys 0m1.507s 00:04:55.278 02:02:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.278 02:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:55.278 ************************************ 00:04:55.278 END TEST json_config 00:04:55.278 ************************************ 00:04:55.278 02:02:09 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:55.278 02:02:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.278 02:02:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.278 02:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:55.278 ************************************ 00:04:55.278 START TEST json_config_extra_key 00:04:55.278 ************************************ 00:04:55.278 02:02:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.278 02:02:09 -- nvmf/common.sh@7 -- # uname -s 00:04:55.278 02:02:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.278 02:02:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.278 02:02:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.278 02:02:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.278 02:02:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.278 02:02:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.278 02:02:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.278 02:02:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.278 02:02:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.278 02:02:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.278 02:02:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:04:55.278 02:02:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:04:55.278 02:02:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.278 02:02:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.278 02:02:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.278 02:02:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.278 02:02:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.278 02:02:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.278 02:02:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.278 02:02:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.278 02:02:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.278 02:02:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.278 02:02:09 -- paths/export.sh@5 -- # export PATH 00:04:55.278 02:02:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.278 02:02:09 -- nvmf/common.sh@46 -- # : 0 00:04:55.278 02:02:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:55.278 02:02:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:55.278 02:02:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:55.278 02:02:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.278 02:02:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.278 02:02:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:55.278 02:02:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:55.278 02:02:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.278 INFO: launching applications... 00:04:55.278 02:02:09 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
00:04:55.279 02:02:09 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:55.279 02:02:09 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:55.279 02:02:09 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:55.279 02:02:09 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:55.279 02:02:09 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:55.279 02:02:09 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56380 00:04:55.279 Waiting for target to run... 00:04:55.279 02:02:09 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:55.279 02:02:09 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:55.279 02:02:09 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56380 /var/tmp/spdk_tgt.sock 00:04:55.279 02:02:09 -- common/autotest_common.sh@819 -- # '[' -z 56380 ']' 00:04:55.279 02:02:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.279 02:02:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:55.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.279 02:02:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.279 02:02:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:55.279 02:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:55.537 [2024-05-14 02:02:09.889820] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:55.537 [2024-05-14 02:02:09.889961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56380 ] 00:04:55.795 [2024-05-14 02:02:10.195876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.795 [2024-05-14 02:02:10.249895] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:55.795 [2024-05-14 02:02:10.250086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.362 02:02:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:56.362 02:02:10 -- common/autotest_common.sh@852 -- # return 0 00:04:56.362 00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:56.362 INFO: shutting down applications... 00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
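Unlike the earlier json_config run, the extra_key test never starts the target with --wait-for-rpc: as the trace above shows, it boots spdk_tgt straight from a static JSON file and only verifies clean startup and shutdown. The launch reduces to a single command (paths and flags as in this run):
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &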
00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56380 ]] 00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56380 00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56380 00:04:56.362 02:02:10 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:56.929 02:02:11 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:56.929 02:02:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:56.929 02:02:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56380 00:04:56.929 02:02:11 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:56.929 02:02:11 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:56.929 02:02:11 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:56.929 SPDK target shutdown done 00:04:56.929 02:02:11 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:56.929 Success 00:04:56.929 02:02:11 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:56.929 00:04:56.929 real 0m1.593s 00:04:56.929 user 0m1.504s 00:04:56.929 sys 0m0.307s 00:04:56.929 02:02:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.929 02:02:11 -- common/autotest_common.sh@10 -- # set +x 00:04:56.929 ************************************ 00:04:56.929 END TEST json_config_extra_key 00:04:56.929 ************************************ 00:04:56.929 02:02:11 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.929 02:02:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.929 02:02:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.929 02:02:11 -- common/autotest_common.sh@10 -- # set +x 00:04:56.929 ************************************ 00:04:56.929 START TEST alias_rpc 00:04:56.929 ************************************ 00:04:56.929 02:02:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.929 * Looking for test storage... 00:04:56.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:56.929 02:02:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:56.929 02:02:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56450 00:04:56.929 02:02:11 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.929 02:02:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56450 00:04:56.929 02:02:11 -- common/autotest_common.sh@819 -- # '[' -z 56450 ']' 00:04:56.929 02:02:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.929 02:02:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:56.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.929 02:02:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
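The alias_rpc section that starts here follows the lifecycle every test in this log uses: launch spdk_tgt (this time on the default /var/tmp/spdk.sock), arm an ERR trap that tears it down on failure, exercise the RPCs, then killprocess on the normal path. A compact sketch of that guard; killprocess and waitforlisten are the suite's helpers from autotest_common.sh, and the load_config -i call mirrors the replay that follows in the trace:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  # Any failing command from here on tears the target down before exiting.
  trap 'killprocess $spdk_tgt_pid; exit 1' ERR
  waitforlisten "$spdk_tgt_pid"
  # Replay a JSON config through the plain rpc.py client.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < conf.json   # hypothetical stdin file; the trace elides the redirection
  # Normal exit path: stop the target explicitly and clear the trap.
  killprocess "$spdk_tgt_pid"
  trap - ERR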
00:04:56.929 02:02:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:56.929 02:02:11 -- common/autotest_common.sh@10 -- # set +x 00:04:57.187 [2024-05-14 02:02:11.525256] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:57.187 [2024-05-14 02:02:11.525379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56450 ] 00:04:57.187 [2024-05-14 02:02:11.660464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.187 [2024-05-14 02:02:11.719009] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.187 [2024-05-14 02:02:11.719179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.120 02:02:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.120 02:02:12 -- common/autotest_common.sh@852 -- # return 0 00:04:58.120 02:02:12 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:58.377 02:02:12 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56450 00:04:58.377 02:02:12 -- common/autotest_common.sh@926 -- # '[' -z 56450 ']' 00:04:58.377 02:02:12 -- common/autotest_common.sh@930 -- # kill -0 56450 00:04:58.377 02:02:12 -- common/autotest_common.sh@931 -- # uname 00:04:58.377 02:02:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:58.377 02:02:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56450 00:04:58.377 02:02:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:58.377 02:02:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:58.377 killing process with pid 56450 00:04:58.377 02:02:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56450' 00:04:58.377 02:02:12 -- common/autotest_common.sh@945 -- # kill 56450 00:04:58.377 02:02:12 -- common/autotest_common.sh@950 -- # wait 56450 00:04:58.635 00:04:58.635 real 0m1.732s 00:04:58.635 user 0m2.142s 00:04:58.635 sys 0m0.323s 00:04:58.635 02:02:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.635 02:02:13 -- common/autotest_common.sh@10 -- # set +x 00:04:58.635 ************************************ 00:04:58.635 END TEST alias_rpc 00:04:58.635 ************************************ 00:04:58.635 02:02:13 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:04:58.635 02:02:13 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:58.635 02:02:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.635 02:02:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.635 02:02:13 -- common/autotest_common.sh@10 -- # set +x 00:04:58.635 ************************************ 00:04:58.635 START TEST dpdk_mem_utility 00:04:58.635 ************************************ 00:04:58.635 02:02:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:58.635 * Looking for test storage... 
00:04:58.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:58.894 02:02:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:58.894 02:02:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56541 00:04:58.894 02:02:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.894 02:02:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56541 00:04:58.894 02:02:13 -- common/autotest_common.sh@819 -- # '[' -z 56541 ']' 00:04:58.894 02:02:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.894 02:02:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.894 02:02:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.894 02:02:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.894 02:02:13 -- common/autotest_common.sh@10 -- # set +x 00:04:58.894 [2024-05-14 02:02:13.293121] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:58.894 [2024-05-14 02:02:13.293218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56541 ] 00:04:58.894 [2024-05-14 02:02:13.434742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.151 [2024-05-14 02:02:13.496557] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.151 [2024-05-14 02:02:13.496719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.715 02:02:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.715 02:02:14 -- common/autotest_common.sh@852 -- # return 0 00:04:59.715 02:02:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:59.715 02:02:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:59.715 02:02:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.715 02:02:14 -- common/autotest_common.sh@10 -- # set +x 00:04:59.715 { 00:04:59.715 "filename": "/tmp/spdk_mem_dump.txt" 00:04:59.715 } 00:04:59.715 02:02:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.715 02:02:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.974 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:59.974 1 heaps totaling size 814.000000 MiB 00:04:59.974 size: 814.000000 MiB heap id: 0 00:04:59.974 end heaps---------- 00:04:59.974 8 mempools totaling size 598.116089 MiB 00:04:59.974 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:59.974 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:59.974 size: 84.521057 MiB name: bdev_io_56541 00:04:59.974 size: 51.011292 MiB name: evtpool_56541 00:04:59.974 size: 50.003479 MiB name: msgpool_56541 00:04:59.974 size: 21.763794 MiB name: PDU_Pool 00:04:59.974 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:59.974 size: 0.026123 MiB name: Session_Pool 00:04:59.974 end mempools------- 00:04:59.974 6 memzones totaling size 4.142822 MiB 00:04:59.974 size: 1.000366 MiB name: RG_ring_0_56541 
00:04:59.974 size: 1.000366 MiB name: RG_ring_1_56541 00:04:59.974 size: 1.000366 MiB name: RG_ring_4_56541 00:04:59.974 size: 1.000366 MiB name: RG_ring_5_56541 00:04:59.974 size: 0.125366 MiB name: RG_ring_2_56541 00:04:59.974 size: 0.015991 MiB name: RG_ring_3_56541 00:04:59.974 end memzones------- 00:04:59.974 02:02:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:59.974 heap id: 0 total size: 814.000000 MiB number of busy elements: 223 number of free elements: 15 00:04:59.974 list of free elements. size: 12.486023 MiB 00:04:59.974 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:59.974 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:59.974 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:59.974 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:59.974 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:59.974 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:59.974 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:59.974 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:59.974 element at address: 0x200000200000 with size: 0.837219 MiB 00:04:59.974 element at address: 0x20001aa00000 with size: 0.572266 MiB 00:04:59.974 element at address: 0x20000b200000 with size: 0.489441 MiB 00:04:59.974 element at address: 0x200000800000 with size: 0.486877 MiB 00:04:59.974 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:59.974 element at address: 0x200027e00000 with size: 0.397949 MiB 00:04:59.974 element at address: 0x200003a00000 with size: 0.351501 MiB 00:04:59.974 list of standard malloc elements. size: 199.251404 MiB 00:04:59.974 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:59.974 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:59.974 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:59.974 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:59.974 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:59.974 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:59.974 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:59.974 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:59.974 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:59.974 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:04:59.974 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:59.974 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:59.974 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:59.974 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:59.974 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:59.974 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:59.974 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:59.974 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:59.975 element at 
address: 0x200003affa80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa93f40 
with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:59.975 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6d980 with size: 0.000183 MiB 
00:04:59.975 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:59.975 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:59.976 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:59.976 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:59.976 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:59.976 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:59.976 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:59.976 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:59.976 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:59.976 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:59.976 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:59.976 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:59.976 element at 
address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:59.976 list of memzone associated elements. size: 602.262573 MiB 00:04:59.976 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:59.976 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:59.976 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:59.976 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:59.976 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:59.976 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56541_0 00:04:59.976 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:59.976 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56541_0 00:04:59.976 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:59.976 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56541_0 00:04:59.976 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:59.976 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:59.976 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:59.976 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:59.976 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:59.976 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56541 00:04:59.976 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:59.976 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56541 00:04:59.976 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:59.976 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56541 00:04:59.976 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:59.976 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:59.976 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:59.976 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:59.976 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:59.976 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:59.976 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:59.976 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:59.976 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:59.976 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56541 00:04:59.976 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:59.976 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56541 00:04:59.976 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:59.976 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56541 00:04:59.976 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:59.976 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56541 00:04:59.976 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:59.976 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56541 00:04:59.976 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:59.976 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:59.976 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:59.976 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:59.976 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:59.976 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:59.976 
element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:59.976 associated memzone info: size: 0.125366 MiB name: RG_ring_2_56541 00:04:59.976 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:59.976 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:59.976 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:04:59.976 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:59.976 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:59.976 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56541 00:04:59.976 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:04:59.976 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:59.976 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:59.976 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56541 00:04:59.976 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:59.976 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56541 00:04:59.976 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:04:59.976 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:59.976 02:02:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:59.976 02:02:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56541 00:04:59.976 02:02:14 -- common/autotest_common.sh@926 -- # '[' -z 56541 ']' 00:04:59.976 02:02:14 -- common/autotest_common.sh@930 -- # kill -0 56541 00:04:59.976 02:02:14 -- common/autotest_common.sh@931 -- # uname 00:04:59.976 02:02:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:59.976 02:02:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56541 00:04:59.976 02:02:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:59.976 02:02:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:59.976 killing process with pid 56541 00:04:59.976 02:02:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56541' 00:04:59.976 02:02:14 -- common/autotest_common.sh@945 -- # kill 56541 00:04:59.976 02:02:14 -- common/autotest_common.sh@950 -- # wait 56541 00:05:00.234 ************************************ 00:05:00.234 END TEST dpdk_mem_utility 00:05:00.234 ************************************ 00:05:00.234 00:05:00.234 real 0m1.566s 00:05:00.234 user 0m1.817s 00:05:00.234 sys 0m0.334s 00:05:00.234 02:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.234 02:02:14 -- common/autotest_common.sh@10 -- # set +x 00:05:00.234 02:02:14 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:00.234 02:02:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.234 02:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.234 02:02:14 -- common/autotest_common.sh@10 -- # set +x 00:05:00.234 ************************************ 00:05:00.234 START TEST event 00:05:00.234 ************************************ 00:05:00.234 02:02:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:00.492 * Looking for test storage... 
00:05:00.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:00.492 02:02:14 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:00.492 02:02:14 -- bdev/nbd_common.sh@6 -- # set -e 00:05:00.492 02:02:14 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.492 02:02:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:00.492 02:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.492 02:02:14 -- common/autotest_common.sh@10 -- # set +x 00:05:00.492 ************************************ 00:05:00.492 START TEST event_perf 00:05:00.492 ************************************ 00:05:00.492 02:02:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.492 Running I/O for 1 seconds...[2024-05-14 02:02:14.869993] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:00.492 [2024-05-14 02:02:14.870070] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56630 ] 00:05:00.492 [2024-05-14 02:02:15.009240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.748 [2024-05-14 02:02:15.084823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.748 [2024-05-14 02:02:15.084937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.748 [2024-05-14 02:02:15.085289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.748 [2024-05-14 02:02:15.085295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.682 Running I/O for 1 seconds... 00:05:01.682 lcore 0: 180891 00:05:01.682 lcore 1: 180893 00:05:01.682 lcore 2: 180897 00:05:01.682 lcore 3: 180899 00:05:01.682 done. 00:05:01.682 00:05:01.682 real 0m1.332s 00:05:01.682 user 0m4.149s 00:05:01.682 sys 0m0.046s 00:05:01.682 02:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.682 02:02:16 -- common/autotest_common.sh@10 -- # set +x 00:05:01.682 ************************************ 00:05:01.682 END TEST event_perf 00:05:01.682 ************************************ 00:05:01.682 02:02:16 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.682 02:02:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:01.682 02:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.682 02:02:16 -- common/autotest_common.sh@10 -- # set +x 00:05:01.682 ************************************ 00:05:01.682 START TEST event_reactor 00:05:01.682 ************************************ 00:05:01.682 02:02:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.682 [2024-05-14 02:02:16.242099] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:01.682 [2024-05-14 02:02:16.242628] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56668 ] 00:05:01.962 [2024-05-14 02:02:16.375188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.962 [2024-05-14 02:02:16.435097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.347 test_start 00:05:03.347 oneshot 00:05:03.347 tick 100 00:05:03.347 tick 100 00:05:03.347 tick 250 00:05:03.347 tick 100 00:05:03.347 tick 100 00:05:03.347 tick 250 00:05:03.347 tick 500 00:05:03.347 tick 100 00:05:03.347 tick 100 00:05:03.347 tick 100 00:05:03.347 tick 250 00:05:03.347 tick 100 00:05:03.347 tick 100 00:05:03.347 test_end 00:05:03.347 00:05:03.347 real 0m1.303s 00:05:03.347 user 0m1.159s 00:05:03.347 sys 0m0.036s 00:05:03.347 02:02:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.347 02:02:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.347 ************************************ 00:05:03.347 END TEST event_reactor 00:05:03.347 ************************************ 00:05:03.347 02:02:17 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.347 02:02:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:03.347 02:02:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.347 02:02:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.348 ************************************ 00:05:03.348 START TEST event_reactor_perf 00:05:03.348 ************************************ 00:05:03.348 02:02:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.348 [2024-05-14 02:02:17.594290] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:03.348 [2024-05-14 02:02:17.594370] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56698 ] 00:05:03.348 [2024-05-14 02:02:17.733259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.348 [2024-05-14 02:02:17.801797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.721 test_start 00:05:04.721 test_end 00:05:04.721 Performance: 336036 events per second 00:05:04.721 00:05:04.721 real 0m1.320s 00:05:04.721 user 0m1.167s 00:05:04.721 sys 0m0.046s 00:05:04.721 02:02:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.721 02:02:18 -- common/autotest_common.sh@10 -- # set +x 00:05:04.721 ************************************ 00:05:04.721 END TEST event_reactor_perf 00:05:04.721 ************************************ 00:05:04.721 02:02:18 -- event/event.sh@49 -- # uname -s 00:05:04.721 02:02:18 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:04.721 02:02:18 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:04.721 02:02:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.721 02:02:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.721 02:02:18 -- common/autotest_common.sh@10 -- # set +x 00:05:04.721 ************************************ 00:05:04.721 START TEST event_scheduler 00:05:04.722 ************************************ 00:05:04.722 02:02:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:04.722 * Looking for test storage... 00:05:04.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:04.722 02:02:19 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:04.722 02:02:19 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56759 00:05:04.722 02:02:19 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:04.722 02:02:19 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.722 02:02:19 -- scheduler/scheduler.sh@37 -- # waitforlisten 56759 00:05:04.722 02:02:19 -- common/autotest_common.sh@819 -- # '[' -z 56759 ']' 00:05:04.722 02:02:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.722 02:02:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:04.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.722 02:02:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.722 02:02:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:04.722 02:02:19 -- common/autotest_common.sh@10 -- # set +x 00:05:04.722 [2024-05-14 02:02:19.076403] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:04.722 [2024-05-14 02:02:19.076547] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56759 ] 00:05:04.722 [2024-05-14 02:02:19.221967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.722 [2024-05-14 02:02:19.293054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.722 [2024-05-14 02:02:19.293120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.722 [2024-05-14 02:02:19.293195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.722 [2024-05-14 02:02:19.293217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.655 02:02:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:05.655 02:02:20 -- common/autotest_common.sh@852 -- # return 0 00:05:05.655 02:02:20 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:05.655 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.655 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.655 POWER: Env isn't set yet! 00:05:05.655 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:05.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.655 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.655 POWER: Attempting to initialise PSTAT power management... 00:05:05.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.655 POWER: Cannot set governor of lcore 0 to performance 00:05:05.655 POWER: Attempting to initialise AMD PSTATE power management... 00:05:05.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.655 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.655 POWER: Attempting to initialise CPPC power management... 00:05:05.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.655 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.655 POWER: Attempting to initialise VM power management... 00:05:05.655 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:05.655 POWER: Unable to set Power Management Environment for lcore 0 00:05:05.655 [2024-05-14 02:02:20.130845] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:05.655 [2024-05-14 02:02:20.130858] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:05.655 [2024-05-14 02:02:20.130867] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:05.655 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.655 02:02:20 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:05.655 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.655 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.655 [2024-05-14 02:02:20.184834] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
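As the trace above shows, the scheduler app is started with --wait-for-rpc and configured entirely over RPC: the dynamic scheduler is selected first, then framework initialization is triggered, and the cpufreq/governor errors are tolerated because the dynamic scheduler can run without the DPDK governor in this VM. A rough sketch of that RPC sequence, using rpc.py directly rather than the test's rpc_cmd helper (script path and default socket taken from the trace):

# Select the dynamic scheduler before subsystem init (sketch of the RPC sequence above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

"$rpc" -s "$sock" framework_set_scheduler dynamic   # only possible while --wait-for-rpc holds init back
"$rpc" -s "$sock" framework_start_init              # reactors start; governor init may fail and is non-fatal here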
00:05:05.655 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.655 02:02:20 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:05.655 02:02:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:05.655 02:02:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:05.655 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.655 ************************************ 00:05:05.655 START TEST scheduler_create_thread 00:05:05.655 ************************************ 00:05:05.655 02:02:20 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:05.655 02:02:20 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:05.655 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.655 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.655 2 00:05:05.655 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.655 02:02:20 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:05.655 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.655 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.655 3 00:05:05.655 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.655 02:02:20 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:05.655 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.655 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.655 4 00:05:05.655 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.655 02:02:20 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:05.655 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.655 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.655 5 00:05:05.655 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.655 02:02:20 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:05.655 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.655 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.655 6 00:05:05.655 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.655 02:02:20 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:05.655 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.913 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.913 7 00:05:05.913 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.913 02:02:20 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:05.913 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.913 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.913 8 00:05:05.913 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.913 02:02:20 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:05.913 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.913 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.913 9 00:05:05.913 
02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.913 02:02:20 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:05.913 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.913 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.913 10 00:05:05.913 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.913 02:02:20 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:05.913 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.913 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.913 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.913 02:02:20 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:05.913 02:02:20 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:05.913 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.913 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.913 02:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.913 02:02:20 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:05.913 02:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.913 02:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:07.287 02:02:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.287 02:02:21 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:07.287 02:02:21 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:07.287 02:02:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.287 02:02:21 -- common/autotest_common.sh@10 -- # set +x 00:05:08.223 02:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:08.223 00:05:08.223 real 0m2.612s 00:05:08.224 user 0m0.017s 00:05:08.224 sys 0m0.007s 00:05:08.224 02:02:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.224 ************************************ 00:05:08.224 END TEST scheduler_create_thread 00:05:08.224 ************************************ 00:05:08.224 02:02:22 -- common/autotest_common.sh@10 -- # set +x 00:05:08.482 02:02:22 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:08.483 02:02:22 -- scheduler/scheduler.sh@46 -- # killprocess 56759 00:05:08.483 02:02:22 -- common/autotest_common.sh@926 -- # '[' -z 56759 ']' 00:05:08.483 02:02:22 -- common/autotest_common.sh@930 -- # kill -0 56759 00:05:08.483 02:02:22 -- common/autotest_common.sh@931 -- # uname 00:05:08.483 02:02:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:08.483 02:02:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56759 00:05:08.483 02:02:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:08.483 02:02:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:08.483 killing process with pid 56759 00:05:08.483 02:02:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56759' 00:05:08.483 02:02:22 -- common/autotest_common.sh@945 -- # kill 56759 00:05:08.483 02:02:22 -- common/autotest_common.sh@950 -- # wait 56759 00:05:08.742 [2024-05-14 02:02:23.284732] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
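The scheduler_create_thread subtest above drives the app through an out-of-tree RPC plugin: threads are created with a name, an optional CPU mask, and an activity percentage, one thread's activity is raised at runtime, and a final thread is created and deleted again. A condensed sketch of those calls follows; it invokes rpc.py with --plugin directly (the test goes through the rpc_cmd helper and creates more pinned threads than shown), and the thread IDs 11 and 12 are simply the values this run returned.

# scheduler_plugin RPCs exercised above (sketch; assumes the plugin module is importable).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"

$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100    # busy thread pinned to core 0
$rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0        # idle thread pinned to core 0
$rpc scheduler_thread_create -n one_third_active -a 30         # unpinned, ~30% active
thread_id=$($rpc scheduler_thread_create -n half_active -a 0)  # returned 11 in this run
$rpc scheduler_thread_set_active "$thread_id" 50               # raise it to 50% active
thread_id=$($rpc scheduler_thread_create -n deleted -a 100)    # returned 12 in this run
$rpc scheduler_thread_delete "$thread_id"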
00:05:09.001 00:05:09.001 real 0m4.542s 00:05:09.001 user 0m8.985s 00:05:09.001 sys 0m0.303s 00:05:09.001 02:02:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.001 02:02:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.001 ************************************ 00:05:09.001 END TEST event_scheduler 00:05:09.001 ************************************ 00:05:09.001 02:02:23 -- event/event.sh@51 -- # modprobe -n nbd 00:05:09.001 02:02:23 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:09.001 02:02:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.001 02:02:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.001 02:02:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.001 ************************************ 00:05:09.001 START TEST app_repeat 00:05:09.001 ************************************ 00:05:09.001 02:02:23 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:09.001 02:02:23 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.001 02:02:23 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.001 02:02:23 -- event/event.sh@13 -- # local nbd_list 00:05:09.001 02:02:23 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.001 02:02:23 -- event/event.sh@14 -- # local bdev_list 00:05:09.001 02:02:23 -- event/event.sh@15 -- # local repeat_times=4 00:05:09.001 02:02:23 -- event/event.sh@17 -- # modprobe nbd 00:05:09.001 02:02:23 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:09.001 02:02:23 -- event/event.sh@19 -- # repeat_pid=56876 00:05:09.001 02:02:23 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.001 Process app_repeat pid: 56876 00:05:09.001 02:02:23 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 56876' 00:05:09.001 02:02:23 -- event/event.sh@23 -- # for i in {0..2} 00:05:09.001 spdk_app_start Round 0 00:05:09.001 02:02:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:09.001 02:02:23 -- event/event.sh@25 -- # waitforlisten 56876 /var/tmp/spdk-nbd.sock 00:05:09.001 02:02:23 -- common/autotest_common.sh@819 -- # '[' -z 56876 ']' 00:05:09.001 02:02:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.001 02:02:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.001 02:02:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.001 02:02:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.001 02:02:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.001 [2024-05-14 02:02:23.561950] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:09.001 [2024-05-14 02:02:23.562096] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56876 ] 00:05:09.260 [2024-05-14 02:02:23.709297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.260 [2024-05-14 02:02:23.793928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.260 [2024-05-14 02:02:23.793942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.194 02:02:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:10.194 02:02:24 -- common/autotest_common.sh@852 -- # return 0 00:05:10.194 02:02:24 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.453 Malloc0 00:05:10.453 02:02:24 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.711 Malloc1 00:05:10.711 02:02:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@12 -- # local i 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.711 02:02:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.969 /dev/nbd0 00:05:10.969 02:02:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.969 02:02:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.969 02:02:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:10.969 02:02:25 -- common/autotest_common.sh@857 -- # local i 00:05:10.969 02:02:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:10.969 02:02:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:10.969 02:02:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:10.969 02:02:25 -- common/autotest_common.sh@861 -- # break 00:05:10.969 02:02:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:10.969 02:02:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:10.969 02:02:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.969 1+0 records in 00:05:10.969 1+0 records out 00:05:10.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261888 s, 15.6 MB/s 00:05:10.969 02:02:25 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.969 02:02:25 -- common/autotest_common.sh@874 -- # size=4096 00:05:10.970 02:02:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.970 02:02:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:10.970 02:02:25 -- common/autotest_common.sh@877 -- # return 0 00:05:10.970 02:02:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.970 02:02:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.970 02:02:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.228 /dev/nbd1 00:05:11.228 02:02:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.228 02:02:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.228 02:02:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:11.228 02:02:25 -- common/autotest_common.sh@857 -- # local i 00:05:11.228 02:02:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:11.228 02:02:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:11.228 02:02:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:11.228 02:02:25 -- common/autotest_common.sh@861 -- # break 00:05:11.228 02:02:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:11.228 02:02:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:11.228 02:02:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.228 1+0 records in 00:05:11.228 1+0 records out 00:05:11.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050255 s, 8.2 MB/s 00:05:11.228 02:02:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.229 02:02:25 -- common/autotest_common.sh@874 -- # size=4096 00:05:11.229 02:02:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.229 02:02:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:11.229 02:02:25 -- common/autotest_common.sh@877 -- # return 0 00:05:11.229 02:02:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.229 02:02:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.229 02:02:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.229 02:02:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.229 02:02:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.487 02:02:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.487 { 00:05:11.487 "bdev_name": "Malloc0", 00:05:11.487 "nbd_device": "/dev/nbd0" 00:05:11.487 }, 00:05:11.487 { 00:05:11.487 "bdev_name": "Malloc1", 00:05:11.487 "nbd_device": "/dev/nbd1" 00:05:11.487 } 00:05:11.487 ]' 00:05:11.487 02:02:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.487 { 00:05:11.487 "bdev_name": "Malloc0", 00:05:11.487 "nbd_device": "/dev/nbd0" 00:05:11.487 }, 00:05:11.487 { 00:05:11.487 "bdev_name": "Malloc1", 00:05:11.487 "nbd_device": "/dev/nbd1" 00:05:11.487 } 00:05:11.487 ]' 00:05:11.487 02:02:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.746 /dev/nbd1' 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 
00:05:11.746 /dev/nbd1' 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.746 256+0 records in 00:05:11.746 256+0 records out 00:05:11.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00752957 s, 139 MB/s 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.746 256+0 records in 00:05:11.746 256+0 records out 00:05:11.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321442 s, 32.6 MB/s 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.746 256+0 records in 00:05:11.746 256+0 records out 00:05:11.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325406 s, 32.2 MB/s 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@51 -- # local i 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.746 02:02:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd0 00:05:12.004 02:02:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.004 02:02:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.004 02:02:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.004 02:02:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.004 02:02:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.004 02:02:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.004 02:02:26 -- bdev/nbd_common.sh@41 -- # break 00:05:12.004 02:02:26 -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.004 02:02:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.004 02:02:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@41 -- # break 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.288 02:02:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@65 -- # true 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.854 02:02:27 -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.854 02:02:27 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.112 02:02:27 -- event/event.sh@35 -- # sleep 3 00:05:13.112 [2024-05-14 02:02:27.649072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.370 [2024-05-14 02:02:27.709886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.370 [2024-05-14 02:02:27.709898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.370 [2024-05-14 02:02:27.742313] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.370 [2024-05-14 02:02:27.742390] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
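The Round 0 trace above is one full pass of the nbd verify cycle that the event test repeats every round: two 64 MB malloc bdevs are exported over the /var/tmp/spdk-nbd.sock RPC socket as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each device with O_DIRECT, read back with cmp, and the devices are detached again before the app is killed with spdk_kill_instance SIGTERM and restarted for the next round. A condensed, standalone sketch of that cycle (the RPC calls and sizes mirror the log; the scratch-file path is illustrative and the waitfornbd retry logic is left out):

    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp=/tmp/nbdrandtest                                  # illustrative scratch path

    "$rpc" -s "$sock" bdev_malloc_create 64 4096          # -> Malloc0 (64 MB, 4 KiB blocks)
    "$rpc" -s "$sock" bdev_malloc_create 64 4096          # -> Malloc1
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$tmp" bs=4096 count=256        # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through the NBD device
        cmp -b -n 1M "$tmp" "$nbd"                              # read back and compare
    done
    rm "$tmp"

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
    "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true   # expect 0 left attached

The attached-device count in the trace is computed the same way: nbd_get_disks returns a JSON array, jq pulls out each nbd_device, and grep -c counts them (2 while the disks are up, 0 after nbd_stop_disk).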
00:05:15.898 02:02:30 -- event/event.sh@23 -- # for i in {0..2} 00:05:15.898 spdk_app_start Round 1 00:05:15.898 02:02:30 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.898 02:02:30 -- event/event.sh@25 -- # waitforlisten 56876 /var/tmp/spdk-nbd.sock 00:05:15.898 02:02:30 -- common/autotest_common.sh@819 -- # '[' -z 56876 ']' 00:05:15.898 02:02:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.898 02:02:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:15.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.898 02:02:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.898 02:02:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:15.898 02:02:30 -- common/autotest_common.sh@10 -- # set +x 00:05:16.156 02:02:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.156 02:02:30 -- common/autotest_common.sh@852 -- # return 0 00:05:16.156 02:02:30 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.414 Malloc0 00:05:16.414 02:02:30 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.672 Malloc1 00:05:17.013 02:02:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@12 -- # local i 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.013 /dev/nbd0 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.013 02:02:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.014 02:02:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:17.014 02:02:31 -- common/autotest_common.sh@857 -- # local i 00:05:17.014 02:02:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:17.014 02:02:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:17.014 02:02:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:17.014 02:02:31 -- common/autotest_common.sh@861 -- # break 00:05:17.014 02:02:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:17.014 02:02:31 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:05:17.014 02:02:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.014 1+0 records in 00:05:17.014 1+0 records out 00:05:17.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431734 s, 9.5 MB/s 00:05:17.014 02:02:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.014 02:02:31 -- common/autotest_common.sh@874 -- # size=4096 00:05:17.014 02:02:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.014 02:02:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:17.014 02:02:31 -- common/autotest_common.sh@877 -- # return 0 00:05:17.014 02:02:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.014 02:02:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.014 02:02:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.272 /dev/nbd1 00:05:17.272 02:02:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.272 02:02:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.272 02:02:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:17.272 02:02:31 -- common/autotest_common.sh@857 -- # local i 00:05:17.272 02:02:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:17.272 02:02:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:17.272 02:02:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:17.272 02:02:31 -- common/autotest_common.sh@861 -- # break 00:05:17.272 02:02:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:17.272 02:02:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:17.272 02:02:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.272 1+0 records in 00:05:17.272 1+0 records out 00:05:17.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435219 s, 9.4 MB/s 00:05:17.531 02:02:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.531 02:02:31 -- common/autotest_common.sh@874 -- # size=4096 00:05:17.531 02:02:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.531 02:02:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:17.531 02:02:31 -- common/autotest_common.sh@877 -- # return 0 00:05:17.531 02:02:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.531 02:02:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.531 02:02:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.531 02:02:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.531 02:02:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.789 { 00:05:17.789 "bdev_name": "Malloc0", 00:05:17.789 "nbd_device": "/dev/nbd0" 00:05:17.789 }, 00:05:17.789 { 00:05:17.789 "bdev_name": "Malloc1", 00:05:17.789 "nbd_device": "/dev/nbd1" 00:05:17.789 } 00:05:17.789 ]' 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.789 { 00:05:17.789 "bdev_name": "Malloc0", 00:05:17.789 "nbd_device": "/dev/nbd0" 00:05:17.789 }, 00:05:17.789 { 00:05:17.789 "bdev_name": "Malloc1", 00:05:17.789 "nbd_device": "/dev/nbd1" 00:05:17.789 } 
00:05:17.789 ]' 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.789 /dev/nbd1' 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.789 /dev/nbd1' 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.789 256+0 records in 00:05:17.789 256+0 records out 00:05:17.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0078701 s, 133 MB/s 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.789 256+0 records in 00:05:17.789 256+0 records out 00:05:17.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264878 s, 39.6 MB/s 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.789 256+0 records in 00:05:17.789 256+0 records out 00:05:17.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281438 s, 37.3 MB/s 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:05:17.789 02:02:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@51 -- # local i 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.789 02:02:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.048 02:02:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.048 02:02:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.048 02:02:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.048 02:02:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.048 02:02:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.048 02:02:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.048 02:02:32 -- bdev/nbd_common.sh@41 -- # break 00:05:18.048 02:02:32 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.048 02:02:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.048 02:02:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@41 -- # break 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.306 02:02:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@65 -- # true 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.564 02:02:33 -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.564 02:02:33 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.823 02:02:33 -- event/event.sh@35 -- # sleep 3 00:05:19.091 [2024-05-14 02:02:33.535747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.091 [2024-05-14 02:02:33.593423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.091 [2024-05-14 02:02:33.593432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.091 [2024-05-14 02:02:33.623561] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:19.091 [2024-05-14 02:02:33.623622] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.375 spdk_app_start Round 2 00:05:22.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.375 02:02:36 -- event/event.sh@23 -- # for i in {0..2} 00:05:22.375 02:02:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.375 02:02:36 -- event/event.sh@25 -- # waitforlisten 56876 /var/tmp/spdk-nbd.sock 00:05:22.375 02:02:36 -- common/autotest_common.sh@819 -- # '[' -z 56876 ']' 00:05:22.375 02:02:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.375 02:02:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.375 02:02:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.375 02:02:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.375 02:02:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.375 02:02:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:22.375 02:02:36 -- common/autotest_common.sh@852 -- # return 0 00:05:22.375 02:02:36 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.375 Malloc0 00:05:22.375 02:02:36 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.633 Malloc1 00:05:22.890 02:02:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@12 -- # local i 00:05:22.890 02:02:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.891 02:02:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.891 02:02:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.891 /dev/nbd0 00:05:23.149 02:02:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.149 02:02:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.149 02:02:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:23.149 02:02:37 -- common/autotest_common.sh@857 -- # local i 00:05:23.149 02:02:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:23.149 02:02:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:23.149 02:02:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:23.149 02:02:37 -- common/autotest_common.sh@861 
-- # break 00:05:23.149 02:02:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:23.149 02:02:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:23.149 02:02:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.149 1+0 records in 00:05:23.149 1+0 records out 00:05:23.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383678 s, 10.7 MB/s 00:05:23.149 02:02:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.149 02:02:37 -- common/autotest_common.sh@874 -- # size=4096 00:05:23.149 02:02:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.149 02:02:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:23.149 02:02:37 -- common/autotest_common.sh@877 -- # return 0 00:05:23.149 02:02:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.149 02:02:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.149 02:02:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.406 /dev/nbd1 00:05:23.406 02:02:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.406 02:02:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.406 02:02:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:23.406 02:02:37 -- common/autotest_common.sh@857 -- # local i 00:05:23.406 02:02:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:23.406 02:02:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:23.406 02:02:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:23.407 02:02:37 -- common/autotest_common.sh@861 -- # break 00:05:23.407 02:02:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:23.407 02:02:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:23.407 02:02:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.407 1+0 records in 00:05:23.407 1+0 records out 00:05:23.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307602 s, 13.3 MB/s 00:05:23.407 02:02:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.407 02:02:37 -- common/autotest_common.sh@874 -- # size=4096 00:05:23.407 02:02:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.407 02:02:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:23.407 02:02:37 -- common/autotest_common.sh@877 -- # return 0 00:05:23.407 02:02:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.407 02:02:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.407 02:02:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.407 02:02:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.407 02:02:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.665 { 00:05:23.665 "bdev_name": "Malloc0", 00:05:23.665 "nbd_device": "/dev/nbd0" 00:05:23.665 }, 00:05:23.665 { 00:05:23.665 "bdev_name": "Malloc1", 00:05:23.665 "nbd_device": "/dev/nbd1" 00:05:23.665 } 00:05:23.665 ]' 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.665 { 00:05:23.665 "bdev_name": "Malloc0", 00:05:23.665 
"nbd_device": "/dev/nbd0" 00:05:23.665 }, 00:05:23.665 { 00:05:23.665 "bdev_name": "Malloc1", 00:05:23.665 "nbd_device": "/dev/nbd1" 00:05:23.665 } 00:05:23.665 ]' 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.665 /dev/nbd1' 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.665 /dev/nbd1' 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.665 256+0 records in 00:05:23.665 256+0 records out 00:05:23.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00698696 s, 150 MB/s 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.665 256+0 records in 00:05:23.665 256+0 records out 00:05:23.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258403 s, 40.6 MB/s 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.665 256+0 records in 00:05:23.665 256+0 records out 00:05:23.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312968 s, 33.5 MB/s 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.665 02:02:38 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.924 02:02:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.924 02:02:38 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.924 02:02:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.924 02:02:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.924 02:02:38 -- bdev/nbd_common.sh@51 -- # local i 00:05:23.924 02:02:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.924 02:02:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.182 02:02:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.182 02:02:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.182 02:02:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.182 02:02:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.182 02:02:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.182 02:02:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.182 02:02:38 -- bdev/nbd_common.sh@41 -- # break 00:05:24.182 02:02:38 -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.182 02:02:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.182 02:02:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@41 -- # break 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.440 02:02:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@65 -- # true 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.699 02:02:39 -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.699 02:02:39 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.957 02:02:39 -- event/event.sh@35 -- # sleep 3 00:05:24.957 [2024-05-14 02:02:39.511986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.215 [2024-05-14 02:02:39.567732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.215 [2024-05-14 02:02:39.567723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.215 [2024-05-14 02:02:39.596624] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:05:25.215 [2024-05-14 02:02:39.596680] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.795 02:02:42 -- event/event.sh@38 -- # waitforlisten 56876 /var/tmp/spdk-nbd.sock 00:05:27.795 02:02:42 -- common/autotest_common.sh@819 -- # '[' -z 56876 ']' 00:05:27.795 02:02:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.795 02:02:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:27.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.795 02:02:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.795 02:02:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:27.795 02:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.053 02:02:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.053 02:02:42 -- common/autotest_common.sh@852 -- # return 0 00:05:28.053 02:02:42 -- event/event.sh@39 -- # killprocess 56876 00:05:28.053 02:02:42 -- common/autotest_common.sh@926 -- # '[' -z 56876 ']' 00:05:28.053 02:02:42 -- common/autotest_common.sh@930 -- # kill -0 56876 00:05:28.053 02:02:42 -- common/autotest_common.sh@931 -- # uname 00:05:28.053 02:02:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:28.053 02:02:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56876 00:05:28.311 02:02:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:28.311 02:02:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:28.311 killing process with pid 56876 00:05:28.311 02:02:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56876' 00:05:28.311 02:02:42 -- common/autotest_common.sh@945 -- # kill 56876 00:05:28.311 02:02:42 -- common/autotest_common.sh@950 -- # wait 56876 00:05:28.311 spdk_app_start is called in Round 0. 00:05:28.312 Shutdown signal received, stop current app iteration 00:05:28.312 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:28.312 spdk_app_start is called in Round 1. 00:05:28.312 Shutdown signal received, stop current app iteration 00:05:28.312 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:28.312 spdk_app_start is called in Round 2. 00:05:28.312 Shutdown signal received, stop current app iteration 00:05:28.312 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:28.312 spdk_app_start is called in Round 3. 
00:05:28.312 Shutdown signal received, stop current app iteration 00:05:28.312 02:02:42 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:28.312 02:02:42 -- event/event.sh@42 -- # return 0 00:05:28.312 00:05:28.312 real 0m19.280s 00:05:28.312 user 0m43.660s 00:05:28.312 sys 0m2.831s 00:05:28.312 02:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.312 02:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.312 ************************************ 00:05:28.312 END TEST app_repeat 00:05:28.312 ************************************ 00:05:28.312 02:02:42 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:28.312 02:02:42 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.312 02:02:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.312 02:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.312 02:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.312 ************************************ 00:05:28.312 START TEST cpu_locks 00:05:28.312 ************************************ 00:05:28.312 02:02:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.570 * Looking for test storage... 00:05:28.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:28.570 02:02:42 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.570 02:02:42 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.570 02:02:42 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.570 02:02:42 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.570 02:02:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.570 02:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.570 02:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.570 ************************************ 00:05:28.570 START TEST default_locks 00:05:28.570 ************************************ 00:05:28.570 02:02:42 -- common/autotest_common.sh@1104 -- # default_locks 00:05:28.570 02:02:42 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57499 00:05:28.570 02:02:42 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.570 02:02:42 -- event/cpu_locks.sh@47 -- # waitforlisten 57499 00:05:28.570 02:02:42 -- common/autotest_common.sh@819 -- # '[' -z 57499 ']' 00:05:28.570 02:02:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.570 02:02:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:28.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.570 02:02:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.570 02:02:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:28.570 02:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.570 [2024-05-14 02:02:43.001592] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
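Every nbd_start_disk in the traces above is followed by a waitfornbd call, and every nbd_stop_disk by waitfornbd_exit. Reconstructed from the xtrace (simplified, not the verbatim source in test/common/autotest_common.sh; the sleep between retries and the scratch path are assumptions), the wait helper behaves roughly like this:

    waitfornbd() {
        local nbd_name=$1 i
        # poll (up to 20 tries) until the kernel lists the new device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                  # assumed back-off; not visible in the trace
        done
        # prove the device is actually readable: pull one 4 KiB block with O_DIRECT
        local tmp=/tmp/nbdtest                         # the trace uses spdk/test/event/nbdtest
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]                               # a non-empty read means the device is live
    }

waitfornbd_exit is the inverse: after nbd_stop_disk it polls /proc/partitions until the device name disappears again.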
00:05:28.570 [2024-05-14 02:02:43.001681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57499 ] 00:05:28.570 [2024-05-14 02:02:43.135332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.828 [2024-05-14 02:02:43.235864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.828 [2024-05-14 02:02:43.236105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.763 02:02:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:29.763 02:02:44 -- common/autotest_common.sh@852 -- # return 0 00:05:29.763 02:02:44 -- event/cpu_locks.sh@49 -- # locks_exist 57499 00:05:29.763 02:02:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.763 02:02:44 -- event/cpu_locks.sh@22 -- # lslocks -p 57499 00:05:30.021 02:02:44 -- event/cpu_locks.sh@50 -- # killprocess 57499 00:05:30.021 02:02:44 -- common/autotest_common.sh@926 -- # '[' -z 57499 ']' 00:05:30.021 02:02:44 -- common/autotest_common.sh@930 -- # kill -0 57499 00:05:30.021 02:02:44 -- common/autotest_common.sh@931 -- # uname 00:05:30.021 02:02:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:30.021 02:02:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57499 00:05:30.021 02:02:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:30.021 02:02:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:30.021 killing process with pid 57499 00:05:30.021 02:02:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57499' 00:05:30.021 02:02:44 -- common/autotest_common.sh@945 -- # kill 57499 00:05:30.021 02:02:44 -- common/autotest_common.sh@950 -- # wait 57499 00:05:30.280 02:02:44 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57499 00:05:30.280 02:02:44 -- common/autotest_common.sh@640 -- # local es=0 00:05:30.280 02:02:44 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57499 00:05:30.280 02:02:44 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:30.280 02:02:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:30.280 02:02:44 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:30.280 02:02:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:30.280 02:02:44 -- common/autotest_common.sh@643 -- # waitforlisten 57499 00:05:30.280 02:02:44 -- common/autotest_common.sh@819 -- # '[' -z 57499 ']' 00:05:30.280 02:02:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.280 02:02:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.280 02:02:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:30.280 02:02:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.280 02:02:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.280 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57499) - No such process 00:05:30.280 ERROR: process (pid: 57499) is no longer running 00:05:30.280 02:02:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.280 02:02:44 -- common/autotest_common.sh@852 -- # return 1 00:05:30.280 02:02:44 -- common/autotest_common.sh@643 -- # es=1 00:05:30.280 02:02:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:30.280 02:02:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:30.280 02:02:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:30.280 02:02:44 -- event/cpu_locks.sh@54 -- # no_locks 00:05:30.280 02:02:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.280 02:02:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.280 02:02:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.280 00:05:30.280 real 0m1.896s 00:05:30.280 user 0m2.240s 00:05:30.280 sys 0m0.471s 00:05:30.280 02:02:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.280 02:02:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.280 ************************************ 00:05:30.280 END TEST default_locks 00:05:30.280 ************************************ 00:05:30.547 02:02:44 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:30.547 02:02:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.547 02:02:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.547 02:02:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.547 ************************************ 00:05:30.547 START TEST default_locks_via_rpc 00:05:30.547 ************************************ 00:05:30.547 02:02:44 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:30.547 02:02:44 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57563 00:05:30.547 02:02:44 -- event/cpu_locks.sh@63 -- # waitforlisten 57563 00:05:30.547 02:02:44 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.547 02:02:44 -- common/autotest_common.sh@819 -- # '[' -z 57563 ']' 00:05:30.547 02:02:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.547 02:02:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.547 02:02:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.547 02:02:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.547 02:02:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.547 [2024-05-14 02:02:44.956436] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:30.547 [2024-05-14 02:02:44.956533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57563 ] 00:05:30.547 [2024-05-14 02:02:45.091084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.822 [2024-05-14 02:02:45.159953] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.822 [2024-05-14 02:02:45.160141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.389 02:02:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:31.389 02:02:45 -- common/autotest_common.sh@852 -- # return 0 00:05:31.389 02:02:45 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:31.389 02:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.389 02:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.389 02:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.389 02:02:45 -- event/cpu_locks.sh@67 -- # no_locks 00:05:31.389 02:02:45 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.389 02:02:45 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.389 02:02:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.389 02:02:45 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.389 02:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.389 02:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.389 02:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.389 02:02:45 -- event/cpu_locks.sh@71 -- # locks_exist 57563 00:05:31.389 02:02:45 -- event/cpu_locks.sh@22 -- # lslocks -p 57563 00:05:31.389 02:02:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.957 02:02:46 -- event/cpu_locks.sh@73 -- # killprocess 57563 00:05:31.957 02:02:46 -- common/autotest_common.sh@926 -- # '[' -z 57563 ']' 00:05:31.957 02:02:46 -- common/autotest_common.sh@930 -- # kill -0 57563 00:05:31.957 02:02:46 -- common/autotest_common.sh@931 -- # uname 00:05:31.957 02:02:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:31.957 02:02:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57563 00:05:31.957 02:02:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:31.957 02:02:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:31.957 killing process with pid 57563 00:05:31.957 02:02:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57563' 00:05:31.957 02:02:46 -- common/autotest_common.sh@945 -- # kill 57563 00:05:31.957 02:02:46 -- common/autotest_common.sh@950 -- # wait 57563 00:05:32.214 00:05:32.214 real 0m1.789s 00:05:32.214 user 0m2.015s 00:05:32.214 sys 0m0.475s 00:05:32.214 02:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.214 02:02:46 -- common/autotest_common.sh@10 -- # set +x 00:05:32.214 ************************************ 00:05:32.214 END TEST default_locks_via_rpc 00:05:32.214 ************************************ 00:05:32.214 02:02:46 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:32.214 02:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.214 02:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.214 02:02:46 -- common/autotest_common.sh@10 -- # set +x 00:05:32.214 
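default_locks and default_locks_via_rpc, both finished above, reduce to the same check: an spdk_tgt pinned to core 0 with -m 0x1 must hold a file lock whose name contains spdk_cpu_lock (visible through lslocks), and, per the second test, the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs can drop and retake that lock at runtime. A minimal sketch of the check (the fixed sleep stands in for the waitforlisten helper the tests actually use):

    # Does the target currently hold its per-core lock file?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 2                                            # illustrative; the tests wait for the RPC socket instead

    locks_exist "$pid" && echo "core 0 lock held"      # default_locks: locked out of the box

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks             # default_locks_via_rpc: release the lock at runtime
    locks_exist "$pid" || echo "lock released"
    "$rpc" framework_enable_cpumask_locks              # ...and take it back
    locks_exist "$pid" && echo "lock re-acquired"

    kill "$pid" && wait "$pid"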
************************************ 00:05:32.214 START TEST non_locking_app_on_locked_coremask 00:05:32.214 ************************************ 00:05:32.214 02:02:46 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:32.214 02:02:46 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57632 00:05:32.214 02:02:46 -- event/cpu_locks.sh@81 -- # waitforlisten 57632 /var/tmp/spdk.sock 00:05:32.214 02:02:46 -- common/autotest_common.sh@819 -- # '[' -z 57632 ']' 00:05:32.214 02:02:46 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.214 02:02:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.214 02:02:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.214 02:02:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.214 02:02:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.214 02:02:46 -- common/autotest_common.sh@10 -- # set +x 00:05:32.214 [2024-05-14 02:02:46.789389] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:32.214 [2024-05-14 02:02:46.789480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57632 ] 00:05:32.472 [2024-05-14 02:02:46.921286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.472 [2024-05-14 02:02:46.989018] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.472 [2024-05-14 02:02:46.989237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.406 02:02:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.406 02:02:47 -- common/autotest_common.sh@852 -- # return 0 00:05:33.406 02:02:47 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57660 00:05:33.406 02:02:47 -- event/cpu_locks.sh@85 -- # waitforlisten 57660 /var/tmp/spdk2.sock 00:05:33.406 02:02:47 -- common/autotest_common.sh@819 -- # '[' -z 57660 ']' 00:05:33.406 02:02:47 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:33.406 02:02:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.406 02:02:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.406 02:02:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.406 02:02:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.406 02:02:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.406 [2024-05-14 02:02:47.852452] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:33.406 [2024-05-14 02:02:47.852601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57660 ] 00:05:33.664 [2024-05-14 02:02:48.007268] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.664 [2024-05-14 02:02:48.007321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.664 [2024-05-14 02:02:48.135688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.664 [2024-05-14 02:02:48.135920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.231 02:02:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:34.231 02:02:48 -- common/autotest_common.sh@852 -- # return 0 00:05:34.231 02:02:48 -- event/cpu_locks.sh@87 -- # locks_exist 57632 00:05:34.231 02:02:48 -- event/cpu_locks.sh@22 -- # lslocks -p 57632 00:05:34.231 02:02:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.176 02:02:49 -- event/cpu_locks.sh@89 -- # killprocess 57632 00:05:35.176 02:02:49 -- common/autotest_common.sh@926 -- # '[' -z 57632 ']' 00:05:35.177 02:02:49 -- common/autotest_common.sh@930 -- # kill -0 57632 00:05:35.177 02:02:49 -- common/autotest_common.sh@931 -- # uname 00:05:35.177 02:02:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.177 02:02:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57632 00:05:35.177 02:02:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:35.177 killing process with pid 57632 00:05:35.177 02:02:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:35.177 02:02:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57632' 00:05:35.177 02:02:49 -- common/autotest_common.sh@945 -- # kill 57632 00:05:35.177 02:02:49 -- common/autotest_common.sh@950 -- # wait 57632 00:05:35.744 02:02:50 -- event/cpu_locks.sh@90 -- # killprocess 57660 00:05:35.744 02:02:50 -- common/autotest_common.sh@926 -- # '[' -z 57660 ']' 00:05:35.744 02:02:50 -- common/autotest_common.sh@930 -- # kill -0 57660 00:05:35.744 02:02:50 -- common/autotest_common.sh@931 -- # uname 00:05:35.744 02:02:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.744 02:02:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57660 00:05:35.744 02:02:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:35.744 killing process with pid 57660 00:05:35.744 02:02:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:35.744 02:02:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57660' 00:05:35.744 02:02:50 -- common/autotest_common.sh@945 -- # kill 57660 00:05:35.744 02:02:50 -- common/autotest_common.sh@950 -- # wait 57660 00:05:36.001 00:05:36.001 real 0m3.781s 00:05:36.001 user 0m4.497s 00:05:36.001 sys 0m0.892s 00:05:36.001 02:02:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.001 02:02:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.001 ************************************ 00:05:36.001 END TEST non_locking_app_on_locked_coremask 00:05:36.001 ************************************ 00:05:36.001 02:02:50 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:36.001 02:02:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.001 02:02:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.001 02:02:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.002 ************************************ 00:05:36.002 START TEST locking_app_on_unlocked_coremask 00:05:36.002 ************************************ 00:05:36.002 02:02:50 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:36.002 02:02:50 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57739 00:05:36.002 02:02:50 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:36.002 02:02:50 -- event/cpu_locks.sh@99 -- # waitforlisten 57739 /var/tmp/spdk.sock 00:05:36.002 02:02:50 -- common/autotest_common.sh@819 -- # '[' -z 57739 ']' 00:05:36.002 02:02:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.002 02:02:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.002 02:02:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.002 02:02:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.002 02:02:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.259 [2024-05-14 02:02:50.623371] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:36.259 [2024-05-14 02:02:50.623473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57739 ] 00:05:36.259 [2024-05-14 02:02:50.758448] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:36.259 [2024-05-14 02:02:50.758500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.259 [2024-05-14 02:02:50.814577] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.259 [2024-05-14 02:02:50.814734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.193 02:02:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.193 02:02:51 -- common/autotest_common.sh@852 -- # return 0 00:05:37.193 02:02:51 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57767 00:05:37.193 02:02:51 -- event/cpu_locks.sh@103 -- # waitforlisten 57767 /var/tmp/spdk2.sock 00:05:37.193 02:02:51 -- common/autotest_common.sh@819 -- # '[' -z 57767 ']' 00:05:37.193 02:02:51 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.193 02:02:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.193 02:02:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.193 02:02:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.193 02:02:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.193 02:02:51 -- common/autotest_common.sh@10 -- # set +x 00:05:37.193 [2024-05-14 02:02:51.674407] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:37.193 [2024-05-14 02:02:51.674499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57767 ] 00:05:37.451 [2024-05-14 02:02:51.818586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.451 [2024-05-14 02:02:51.933510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.451 [2024-05-14 02:02:51.933666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.385 02:02:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.385 02:02:52 -- common/autotest_common.sh@852 -- # return 0 00:05:38.385 02:02:52 -- event/cpu_locks.sh@105 -- # locks_exist 57767 00:05:38.385 02:02:52 -- event/cpu_locks.sh@22 -- # lslocks -p 57767 00:05:38.385 02:02:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.320 02:02:53 -- event/cpu_locks.sh@107 -- # killprocess 57739 00:05:39.320 02:02:53 -- common/autotest_common.sh@926 -- # '[' -z 57739 ']' 00:05:39.320 02:02:53 -- common/autotest_common.sh@930 -- # kill -0 57739 00:05:39.320 02:02:53 -- common/autotest_common.sh@931 -- # uname 00:05:39.320 02:02:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:39.320 02:02:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57739 00:05:39.320 02:02:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:39.320 02:02:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:39.320 killing process with pid 57739 00:05:39.320 02:02:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57739' 00:05:39.320 02:02:53 -- common/autotest_common.sh@945 -- # kill 57739 00:05:39.320 02:02:53 -- common/autotest_common.sh@950 -- # wait 57739 00:05:39.591 02:02:54 -- event/cpu_locks.sh@108 -- # killprocess 57767 00:05:39.591 02:02:54 -- common/autotest_common.sh@926 -- # '[' -z 57767 ']' 00:05:39.591 02:02:54 -- common/autotest_common.sh@930 -- # kill -0 57767 00:05:39.591 02:02:54 -- common/autotest_common.sh@931 -- # uname 00:05:39.591 02:02:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:39.591 02:02:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57767 00:05:39.591 02:02:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:39.591 02:02:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:39.591 killing process with pid 57767 00:05:39.591 02:02:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57767' 00:05:39.591 02:02:54 -- common/autotest_common.sh@945 -- # kill 57767 00:05:39.591 02:02:54 -- common/autotest_common.sh@950 -- # wait 57767 00:05:39.885 00:05:39.885 real 0m3.865s 00:05:39.885 user 0m4.636s 00:05:39.885 sys 0m0.910s 00:05:39.885 02:02:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.885 02:02:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.885 ************************************ 00:05:39.885 END TEST locking_app_on_unlocked_coremask 00:05:39.885 ************************************ 00:05:39.885 02:02:54 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:39.885 02:02:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.885 02:02:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.885 02:02:54 -- common/autotest_common.sh@10 -- # set +x 
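The locks_exist check that just ran reduces to asking lslocks whether the target process holds an advisory lock whose name contains spdk_cpu_lock. A minimal stand-alone sketch of that check, assuming lslocks (util-linux) is available and using the PID from the run above purely as a placeholder:

  pid=57767   # placeholder; substitute the PID of a live spdk_tgt
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "PID $pid holds an SPDK CPU-core lock"
  else
      echo "PID $pid holds no SPDK CPU-core lock"
  fi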
00:05:39.885 ************************************ 00:05:39.885 START TEST locking_app_on_locked_coremask 00:05:39.885 ************************************ 00:05:39.885 02:02:54 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:39.885 02:02:54 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57841 00:05:39.885 02:02:54 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.885 02:02:54 -- event/cpu_locks.sh@116 -- # waitforlisten 57841 /var/tmp/spdk.sock 00:05:39.885 02:02:54 -- common/autotest_common.sh@819 -- # '[' -z 57841 ']' 00:05:39.885 02:02:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.885 02:02:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.885 02:02:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.885 02:02:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.885 02:02:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.144 [2024-05-14 02:02:54.527962] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:40.144 [2024-05-14 02:02:54.528053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57841 ] 00:05:40.144 [2024-05-14 02:02:54.658186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.402 [2024-05-14 02:02:54.741403] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.402 [2024-05-14 02:02:54.741616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.967 02:02:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.967 02:02:55 -- common/autotest_common.sh@852 -- # return 0 00:05:40.967 02:02:55 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57873 00:05:40.967 02:02:55 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57873 /var/tmp/spdk2.sock 00:05:40.967 02:02:55 -- common/autotest_common.sh@640 -- # local es=0 00:05:40.967 02:02:55 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:40.967 02:02:55 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57873 /var/tmp/spdk2.sock 00:05:40.967 02:02:55 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:40.967 02:02:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.967 02:02:55 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:40.967 02:02:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.967 02:02:55 -- common/autotest_common.sh@643 -- # waitforlisten 57873 /var/tmp/spdk2.sock 00:05:40.967 02:02:55 -- common/autotest_common.sh@819 -- # '[' -z 57873 ']' 00:05:40.967 02:02:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.967 02:02:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:40.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.967 02:02:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:40.967 02:02:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:40.967 02:02:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.225 [2024-05-14 02:02:55.606982] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:41.225 [2024-05-14 02:02:55.607084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57873 ] 00:05:41.225 [2024-05-14 02:02:55.752286] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57841 has claimed it. 00:05:41.225 [2024-05-14 02:02:55.752362] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:41.792 ERROR: process (pid: 57873) is no longer running 00:05:41.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57873) - No such process 00:05:41.792 02:02:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.792 02:02:56 -- common/autotest_common.sh@852 -- # return 1 00:05:41.792 02:02:56 -- common/autotest_common.sh@643 -- # es=1 00:05:41.792 02:02:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:41.792 02:02:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:41.792 02:02:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:41.792 02:02:56 -- event/cpu_locks.sh@122 -- # locks_exist 57841 00:05:41.792 02:02:56 -- event/cpu_locks.sh@22 -- # lslocks -p 57841 00:05:41.792 02:02:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.359 02:02:56 -- event/cpu_locks.sh@124 -- # killprocess 57841 00:05:42.359 02:02:56 -- common/autotest_common.sh@926 -- # '[' -z 57841 ']' 00:05:42.359 02:02:56 -- common/autotest_common.sh@930 -- # kill -0 57841 00:05:42.359 02:02:56 -- common/autotest_common.sh@931 -- # uname 00:05:42.359 02:02:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:42.359 02:02:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57841 00:05:42.359 02:02:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:42.359 killing process with pid 57841 00:05:42.359 02:02:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:42.359 02:02:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57841' 00:05:42.359 02:02:56 -- common/autotest_common.sh@945 -- # kill 57841 00:05:42.359 02:02:56 -- common/autotest_common.sh@950 -- # wait 57841 00:05:42.618 00:05:42.618 real 0m2.636s 00:05:42.618 user 0m3.250s 00:05:42.618 sys 0m0.556s 00:05:42.618 02:02:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.618 02:02:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.618 ************************************ 00:05:42.618 END TEST locking_app_on_locked_coremask 00:05:42.618 ************************************ 00:05:42.618 02:02:57 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:42.618 02:02:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.618 02:02:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.618 02:02:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.618 ************************************ 00:05:42.618 START TEST locking_overlapped_coremask 00:05:42.618 ************************************ 00:05:42.618 02:02:57 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:42.618 02:02:57 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57920 00:05:42.618 02:02:57 -- event/cpu_locks.sh@133 -- # waitforlisten 57920 /var/tmp/spdk.sock 00:05:42.618 02:02:57 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:42.618 02:02:57 -- common/autotest_common.sh@819 -- # '[' -z 57920 ']' 00:05:42.618 02:02:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.618 02:02:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.618 02:02:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.618 02:02:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.618 02:02:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.877 [2024-05-14 02:02:57.220343] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:42.877 [2024-05-14 02:02:57.220465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57920 ] 00:05:42.877 [2024-05-14 02:02:57.357471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.877 [2024-05-14 02:02:57.426419] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.877 [2024-05-14 02:02:57.426875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.877 [2024-05-14 02:02:57.426936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.877 [2024-05-14 02:02:57.426939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.812 02:02:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.812 02:02:58 -- common/autotest_common.sh@852 -- # return 0 00:05:43.812 02:02:58 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57950 00:05:43.812 02:02:58 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57950 /var/tmp/spdk2.sock 00:05:43.812 02:02:58 -- common/autotest_common.sh@640 -- # local es=0 00:05:43.812 02:02:58 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57950 /var/tmp/spdk2.sock 00:05:43.812 02:02:58 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:43.812 02:02:58 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:43.812 02:02:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:43.812 02:02:58 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:43.812 02:02:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:43.812 02:02:58 -- common/autotest_common.sh@643 -- # waitforlisten 57950 /var/tmp/spdk2.sock 00:05:43.812 02:02:58 -- common/autotest_common.sh@819 -- # '[' -z 57950 ']' 00:05:43.812 02:02:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.812 02:02:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.812 02:02:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
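The two core masks used by this test overlap on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target is expected to fail while the first one holds the core-2 lock. The overlap is easy to confirm from a shell:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2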
00:05:43.812 02:02:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.812 02:02:58 -- common/autotest_common.sh@10 -- # set +x 00:05:43.812 [2024-05-14 02:02:58.242275] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:43.812 [2024-05-14 02:02:58.242369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57950 ] 00:05:43.812 [2024-05-14 02:02:58.383680] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57920 has claimed it. 00:05:43.812 [2024-05-14 02:02:58.383758] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:44.378 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57950) - No such process 00:05:44.378 ERROR: process (pid: 57950) is no longer running 00:05:44.379 02:02:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.379 02:02:58 -- common/autotest_common.sh@852 -- # return 1 00:05:44.379 02:02:58 -- common/autotest_common.sh@643 -- # es=1 00:05:44.379 02:02:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:44.379 02:02:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:44.379 02:02:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:44.379 02:02:58 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:44.379 02:02:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.379 02:02:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.379 02:02:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.688 02:02:58 -- event/cpu_locks.sh@141 -- # killprocess 57920 00:05:44.688 02:02:58 -- common/autotest_common.sh@926 -- # '[' -z 57920 ']' 00:05:44.688 02:02:58 -- common/autotest_common.sh@930 -- # kill -0 57920 00:05:44.688 02:02:58 -- common/autotest_common.sh@931 -- # uname 00:05:44.688 02:02:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:44.688 02:02:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57920 00:05:44.688 killing process with pid 57920 00:05:44.688 02:02:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:44.688 02:02:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:44.688 02:02:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57920' 00:05:44.688 02:02:58 -- common/autotest_common.sh@945 -- # kill 57920 00:05:44.688 02:02:58 -- common/autotest_common.sh@950 -- # wait 57920 00:05:44.688 ************************************ 00:05:44.688 END TEST locking_overlapped_coremask 00:05:44.688 ************************************ 00:05:44.688 00:05:44.688 real 0m2.116s 00:05:44.688 user 0m5.998s 00:05:44.688 sys 0m0.321s 00:05:44.688 02:02:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.688 02:02:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.958 02:02:59 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:44.958 02:02:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.958 02:02:59 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.958 02:02:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.958 ************************************ 00:05:44.958 START TEST locking_overlapped_coremask_via_rpc 00:05:44.958 ************************************ 00:05:44.958 02:02:59 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:44.958 02:02:59 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=57996 00:05:44.958 02:02:59 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:44.958 02:02:59 -- event/cpu_locks.sh@149 -- # waitforlisten 57996 /var/tmp/spdk.sock 00:05:44.958 02:02:59 -- common/autotest_common.sh@819 -- # '[' -z 57996 ']' 00:05:44.958 02:02:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.958 02:02:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:44.958 02:02:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.958 02:02:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:44.958 02:02:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.958 [2024-05-14 02:02:59.385676] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:44.958 [2024-05-14 02:02:59.385798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57996 ] 00:05:44.958 [2024-05-14 02:02:59.524082] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:44.958 [2024-05-14 02:02:59.524134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.215 [2024-05-14 02:02:59.592232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.215 [2024-05-14 02:02:59.592652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.215 [2024-05-14 02:02:59.592746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.215 [2024-05-14 02:02:59.592752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.781 02:03:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.781 02:03:00 -- common/autotest_common.sh@852 -- # return 0 00:05:45.781 02:03:00 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58026 00:05:45.781 02:03:00 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:45.781 02:03:00 -- event/cpu_locks.sh@153 -- # waitforlisten 58026 /var/tmp/spdk2.sock 00:05:45.781 02:03:00 -- common/autotest_common.sh@819 -- # '[' -z 58026 ']' 00:05:45.781 02:03:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.781 02:03:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:45.781 02:03:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:45.781 02:03:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:45.781 02:03:00 -- common/autotest_common.sh@10 -- # set +x 00:05:46.039 [2024-05-14 02:03:00.402945] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:46.039 [2024-05-14 02:03:00.403084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58026 ] 00:05:46.039 [2024-05-14 02:03:00.560389] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.039 [2024-05-14 02:03:00.560454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.297 [2024-05-14 02:03:00.680555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.297 [2024-05-14 02:03:00.680804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.297 [2024-05-14 02:03:00.683852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:46.297 [2024-05-14 02:03:00.683857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.861 02:03:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.861 02:03:01 -- common/autotest_common.sh@852 -- # return 0 00:05:46.861 02:03:01 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.861 02:03:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.861 02:03:01 -- common/autotest_common.sh@10 -- # set +x 00:05:46.861 02:03:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.861 02:03:01 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.861 02:03:01 -- common/autotest_common.sh@640 -- # local es=0 00:05:46.861 02:03:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.861 02:03:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:46.861 02:03:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:46.861 02:03:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:46.861 02:03:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:46.861 02:03:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.861 02:03:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.861 02:03:01 -- common/autotest_common.sh@10 -- # set +x 00:05:46.861 [2024-05-14 02:03:01.425914] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57996 has claimed it. 00:05:46.861 2024/05/14 02:03:01 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:46.861 request: 00:05:46.861 { 00:05:46.861 "method": "framework_enable_cpumask_locks", 00:05:46.861 "params": {} 00:05:46.861 } 00:05:46.861 Got JSON-RPC error response 00:05:46.861 GoRPCClient: error on JSON-RPC call 00:05:46.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
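The same pair of RPC calls can be reproduced by hand with SPDK's scripts/rpc.py, assuming the framework_enable_cpumask_locks method is exposed there as the JSON-RPC traffic above suggests. The call against the first target should succeed, while the one against the second should return the -32603 'Failed to claim CPU core: 2' error because the first target already holds that core:

  scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # expected: -32603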
00:05:46.861 02:03:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:46.861 02:03:01 -- common/autotest_common.sh@643 -- # es=1 00:05:46.861 02:03:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:46.861 02:03:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:46.861 02:03:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:46.861 02:03:01 -- event/cpu_locks.sh@158 -- # waitforlisten 57996 /var/tmp/spdk.sock 00:05:46.861 02:03:01 -- common/autotest_common.sh@819 -- # '[' -z 57996 ']' 00:05:46.861 02:03:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.861 02:03:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.861 02:03:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.861 02:03:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.861 02:03:01 -- common/autotest_common.sh@10 -- # set +x 00:05:47.427 02:03:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.427 02:03:01 -- common/autotest_common.sh@852 -- # return 0 00:05:47.427 02:03:01 -- event/cpu_locks.sh@159 -- # waitforlisten 58026 /var/tmp/spdk2.sock 00:05:47.427 02:03:01 -- common/autotest_common.sh@819 -- # '[' -z 58026 ']' 00:05:47.427 02:03:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.427 02:03:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.427 02:03:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.427 02:03:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.427 02:03:01 -- common/autotest_common.sh@10 -- # set +x 00:05:47.685 02:03:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.685 02:03:02 -- common/autotest_common.sh@852 -- # return 0 00:05:47.685 02:03:02 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:47.685 02:03:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.685 02:03:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.685 02:03:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.685 00:05:47.685 real 0m2.737s 00:05:47.685 user 0m1.464s 00:05:47.685 sys 0m0.195s 00:05:47.685 02:03:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.685 02:03:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.685 ************************************ 00:05:47.685 END TEST locking_overlapped_coremask_via_rpc 00:05:47.685 ************************************ 00:05:47.685 02:03:02 -- event/cpu_locks.sh@174 -- # cleanup 00:05:47.685 02:03:02 -- event/cpu_locks.sh@15 -- # [[ -z 57996 ]] 00:05:47.685 02:03:02 -- event/cpu_locks.sh@15 -- # killprocess 57996 00:05:47.685 02:03:02 -- common/autotest_common.sh@926 -- # '[' -z 57996 ']' 00:05:47.685 02:03:02 -- common/autotest_common.sh@930 -- # kill -0 57996 00:05:47.685 02:03:02 -- common/autotest_common.sh@931 -- # uname 00:05:47.685 02:03:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.685 02:03:02 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 57996 00:05:47.685 02:03:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:47.685 02:03:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:47.685 02:03:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57996' 00:05:47.685 killing process with pid 57996 00:05:47.685 02:03:02 -- common/autotest_common.sh@945 -- # kill 57996 00:05:47.685 02:03:02 -- common/autotest_common.sh@950 -- # wait 57996 00:05:47.943 02:03:02 -- event/cpu_locks.sh@16 -- # [[ -z 58026 ]] 00:05:47.943 02:03:02 -- event/cpu_locks.sh@16 -- # killprocess 58026 00:05:47.943 02:03:02 -- common/autotest_common.sh@926 -- # '[' -z 58026 ']' 00:05:47.944 02:03:02 -- common/autotest_common.sh@930 -- # kill -0 58026 00:05:47.944 02:03:02 -- common/autotest_common.sh@931 -- # uname 00:05:47.944 02:03:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.944 02:03:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58026 00:05:47.944 02:03:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:47.944 02:03:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:47.944 02:03:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58026' 00:05:47.944 killing process with pid 58026 00:05:47.944 02:03:02 -- common/autotest_common.sh@945 -- # kill 58026 00:05:47.944 02:03:02 -- common/autotest_common.sh@950 -- # wait 58026 00:05:48.202 02:03:02 -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.202 02:03:02 -- event/cpu_locks.sh@1 -- # cleanup 00:05:48.202 02:03:02 -- event/cpu_locks.sh@15 -- # [[ -z 57996 ]] 00:05:48.202 02:03:02 -- event/cpu_locks.sh@15 -- # killprocess 57996 00:05:48.202 02:03:02 -- common/autotest_common.sh@926 -- # '[' -z 57996 ']' 00:05:48.202 02:03:02 -- common/autotest_common.sh@930 -- # kill -0 57996 00:05:48.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (57996) - No such process 00:05:48.202 Process with pid 57996 is not found 00:05:48.202 02:03:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 57996 is not found' 00:05:48.202 02:03:02 -- event/cpu_locks.sh@16 -- # [[ -z 58026 ]] 00:05:48.202 02:03:02 -- event/cpu_locks.sh@16 -- # killprocess 58026 00:05:48.202 02:03:02 -- common/autotest_common.sh@926 -- # '[' -z 58026 ']' 00:05:48.202 02:03:02 -- common/autotest_common.sh@930 -- # kill -0 58026 00:05:48.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58026) - No such process 00:05:48.202 Process with pid 58026 is not found 00:05:48.202 02:03:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58026 is not found' 00:05:48.202 02:03:02 -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.202 00:05:48.202 real 0m19.841s 00:05:48.202 user 0m36.444s 00:05:48.202 sys 0m4.460s 00:05:48.202 02:03:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.202 02:03:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.202 ************************************ 00:05:48.202 END TEST cpu_locks 00:05:48.202 ************************************ 00:05:48.202 00:05:48.202 real 0m47.975s 00:05:48.202 user 1m35.677s 00:05:48.202 sys 0m7.943s 00:05:48.202 02:03:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.202 02:03:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.202 ************************************ 00:05:48.202 END TEST event 00:05:48.202 ************************************ 00:05:48.202 02:03:02 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:48.202 02:03:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.202 02:03:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.202 02:03:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.460 ************************************ 00:05:48.460 START TEST thread 00:05:48.460 ************************************ 00:05:48.460 02:03:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:48.460 * Looking for test storage... 00:05:48.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:48.460 02:03:02 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:48.460 02:03:02 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:48.460 02:03:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.460 02:03:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.460 ************************************ 00:05:48.460 START TEST thread_poller_perf 00:05:48.460 ************************************ 00:05:48.460 02:03:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:48.460 [2024-05-14 02:03:02.891259] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:48.460 [2024-05-14 02:03:02.891353] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58176 ] 00:05:48.460 [2024-05-14 02:03:03.028226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.719 [2024-05-14 02:03:03.097573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.719 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:49.652 ====================================== 00:05:49.652 busy:2211295912 (cyc) 00:05:49.652 total_run_count: 269000 00:05:49.652 tsc_hz: 2200000000 (cyc) 00:05:49.652 ====================================== 00:05:49.652 poller_cost: 8220 (cyc), 3736 (nsec) 00:05:49.652 00:05:49.652 real 0m1.327s 00:05:49.652 user 0m1.172s 00:05:49.652 sys 0m0.045s 00:05:49.652 02:03:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.652 02:03:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.652 ************************************ 00:05:49.652 END TEST thread_poller_perf 00:05:49.652 ************************************ 00:05:49.652 02:03:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:49.652 02:03:04 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:49.652 02:03:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.652 02:03:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.911 ************************************ 00:05:49.911 START TEST thread_poller_perf 00:05:49.911 ************************************ 00:05:49.911 02:03:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:49.911 [2024-05-14 02:03:04.266832] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:49.911 [2024-05-14 02:03:04.266932] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58207 ] 00:05:49.911 [2024-05-14 02:03:04.406493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.911 [2024-05-14 02:03:04.462845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.911 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:51.283 ====================================== 00:05:51.283 busy:2202729086 (cyc) 00:05:51.283 total_run_count: 4013000 00:05:51.283 tsc_hz: 2200000000 (cyc) 00:05:51.283 ====================================== 00:05:51.283 poller_cost: 548 (cyc), 249 (nsec) 00:05:51.283 00:05:51.283 real 0m1.306s 00:05:51.284 user 0m1.146s 00:05:51.284 sys 0m0.052s 00:05:51.284 02:03:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.284 02:03:05 -- common/autotest_common.sh@10 -- # set +x 00:05:51.284 ************************************ 00:05:51.284 END TEST thread_poller_perf 00:05:51.284 ************************************ 00:05:51.284 02:03:05 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:51.284 00:05:51.284 real 0m2.801s 00:05:51.284 user 0m2.370s 00:05:51.284 sys 0m0.205s 00:05:51.284 02:03:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.284 02:03:05 -- common/autotest_common.sh@10 -- # set +x 00:05:51.284 ************************************ 00:05:51.284 END TEST thread 00:05:51.284 ************************************ 00:05:51.284 02:03:05 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:51.284 02:03:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.284 02:03:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.284 02:03:05 -- common/autotest_common.sh@10 -- # set +x 00:05:51.284 ************************************ 00:05:51.284 START TEST accel 00:05:51.284 ************************************ 00:05:51.284 02:03:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:51.284 * Looking for test storage... 00:05:51.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:51.284 02:03:05 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:51.284 02:03:05 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:51.284 02:03:05 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:51.284 02:03:05 -- accel/accel.sh@59 -- # spdk_tgt_pid=58281 00:05:51.284 02:03:05 -- accel/accel.sh@60 -- # waitforlisten 58281 00:05:51.284 02:03:05 -- common/autotest_common.sh@819 -- # '[' -z 58281 ']' 00:05:51.284 02:03:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.284 02:03:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.284 02:03:05 -- accel/accel.sh@58 -- # build_accel_config 00:05:51.284 02:03:05 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:51.284 02:03:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
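The poller_cost figures reported by the two poller_perf runs above are simply the busy cycle count divided by the run count, converted to nanoseconds using the reported 2200000000 Hz TSC. The arithmetic can be checked directly:

  echo $(( 2211295912 / 269000 ))    # 8220 cyc per poll for the 1 us period run
  echo $(( 2202729086 / 4013000 ))   # 548 cyc per poll for the 0 us period run
  # 8220 / 2.2 = ~3736 ns and 548 / 2.2 = ~249 ns, matching the reported nsec values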
00:05:51.284 02:03:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.284 02:03:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.284 02:03:05 -- common/autotest_common.sh@10 -- # set +x 00:05:51.284 02:03:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.284 02:03:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.284 02:03:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.284 02:03:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.284 02:03:05 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.284 02:03:05 -- accel/accel.sh@42 -- # jq -r . 00:05:51.284 [2024-05-14 02:03:05.763931] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:51.284 [2024-05-14 02:03:05.764029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58281 ] 00:05:51.541 [2024-05-14 02:03:05.897443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.541 [2024-05-14 02:03:05.954414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.542 [2024-05-14 02:03:05.954570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.474 02:03:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.474 02:03:06 -- common/autotest_common.sh@852 -- # return 0 00:05:52.474 02:03:06 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:52.474 02:03:06 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:52.474 02:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.474 02:03:06 -- common/autotest_common.sh@10 -- # set +x 00:05:52.474 02:03:06 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:52.474 02:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.474 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.474 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.474 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.474 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.474 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.474 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.474 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.474 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.474 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.474 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.474 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.474 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.474 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.475 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.475 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.475 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.475 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.475 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.475 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.475 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.475 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.475 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.475 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.475 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.475 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.475 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.475 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.475 02:03:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # IFS== 00:05:52.475 02:03:06 -- accel/accel.sh@64 -- # read -r opc module 00:05:52.475 02:03:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:52.475 02:03:06 -- accel/accel.sh@67 -- # killprocess 58281 00:05:52.475 02:03:06 -- common/autotest_common.sh@926 -- # '[' -z 58281 ']' 00:05:52.475 02:03:06 -- common/autotest_common.sh@930 -- # kill -0 58281 00:05:52.475 02:03:06 -- common/autotest_common.sh@931 -- # uname 00:05:52.475 02:03:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:52.475 02:03:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58281 00:05:52.475 02:03:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:52.475 02:03:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:52.475 killing process with pid 58281 00:05:52.475 02:03:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58281' 00:05:52.475 02:03:06 -- common/autotest_common.sh@945 -- # kill 58281 00:05:52.475 02:03:06 -- common/autotest_common.sh@950 -- # wait 58281 00:05:52.733 02:03:07 -- accel/accel.sh@68 -- # trap - ERR 00:05:52.733 02:03:07 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:52.733 02:03:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:52.733 02:03:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.733 02:03:07 -- common/autotest_common.sh@10 -- # set +x 00:05:52.733 02:03:07 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:52.733 02:03:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:52.733 02:03:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.733 02:03:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.733 02:03:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.733 02:03:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.734 02:03:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.734 02:03:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.734 02:03:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.734 02:03:07 -- accel/accel.sh@42 -- # jq -r . 
00:05:52.734 02:03:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.734 02:03:07 -- common/autotest_common.sh@10 -- # set +x 00:05:52.734 02:03:07 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:52.734 02:03:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:52.734 02:03:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.734 02:03:07 -- common/autotest_common.sh@10 -- # set +x 00:05:52.734 ************************************ 00:05:52.734 START TEST accel_missing_filename 00:05:52.734 ************************************ 00:05:52.734 02:03:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:52.734 02:03:07 -- common/autotest_common.sh@640 -- # local es=0 00:05:52.734 02:03:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:52.734 02:03:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:52.734 02:03:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:52.734 02:03:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:52.734 02:03:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:52.734 02:03:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:52.734 02:03:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:52.734 02:03:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.734 02:03:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.734 02:03:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.734 02:03:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.734 02:03:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.734 02:03:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.734 02:03:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.734 02:03:07 -- accel/accel.sh@42 -- # jq -r . 00:05:52.734 [2024-05-14 02:03:07.215897] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:52.734 [2024-05-14 02:03:07.216522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58350 ] 00:05:52.992 [2024-05-14 02:03:07.351552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.992 [2024-05-14 02:03:07.409551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.992 [2024-05-14 02:03:07.439020] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.992 [2024-05-14 02:03:07.478281] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:52.992 A filename is required. 
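What the failing run above lacks is the -l argument: for compress/decompress workloads accel_perf reads its input from the file named there (see the option list printed by the later failing runs). The very next test supplies the repo's 'bib' fixture via -l and then trips over -y instead; a minimal invocation that satisfies the filename requirement would look like:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib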
00:05:52.992 02:03:07 -- common/autotest_common.sh@643 -- # es=234 00:05:52.992 02:03:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:52.992 02:03:07 -- common/autotest_common.sh@652 -- # es=106 00:05:52.992 02:03:07 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:52.992 02:03:07 -- common/autotest_common.sh@660 -- # es=1 00:05:52.992 02:03:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:52.992 00:05:52.992 real 0m0.381s 00:05:52.992 user 0m0.255s 00:05:52.992 sys 0m0.075s 00:05:52.992 02:03:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.992 02:03:07 -- common/autotest_common.sh@10 -- # set +x 00:05:52.992 ************************************ 00:05:52.992 END TEST accel_missing_filename 00:05:52.992 ************************************ 00:05:53.250 02:03:07 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.250 02:03:07 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:53.250 02:03:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.250 02:03:07 -- common/autotest_common.sh@10 -- # set +x 00:05:53.250 ************************************ 00:05:53.250 START TEST accel_compress_verify 00:05:53.250 ************************************ 00:05:53.250 02:03:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.250 02:03:07 -- common/autotest_common.sh@640 -- # local es=0 00:05:53.250 02:03:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.250 02:03:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:53.250 02:03:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.250 02:03:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:53.250 02:03:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.250 02:03:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.250 02:03:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.250 02:03:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.250 02:03:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.250 02:03:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.250 02:03:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.250 02:03:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.250 02:03:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.250 02:03:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.250 02:03:07 -- accel/accel.sh@42 -- # jq -r . 00:05:53.250 [2024-05-14 02:03:07.639801] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:53.250 [2024-05-14 02:03:07.639904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58375 ] 00:05:53.250 [2024-05-14 02:03:07.774683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.508 [2024-05-14 02:03:07.841447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.508 [2024-05-14 02:03:07.874944] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.508 [2024-05-14 02:03:07.917434] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:53.508 00:05:53.508 Compression does not support the verify option, aborting. 00:05:53.508 02:03:08 -- common/autotest_common.sh@643 -- # es=161 00:05:53.508 02:03:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:53.508 02:03:08 -- common/autotest_common.sh@652 -- # es=33 00:05:53.508 02:03:08 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:53.508 02:03:08 -- common/autotest_common.sh@660 -- # es=1 00:05:53.508 02:03:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:53.508 00:05:53.508 real 0m0.413s 00:05:53.508 user 0m0.280s 00:05:53.508 sys 0m0.076s 00:05:53.508 02:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.508 02:03:08 -- common/autotest_common.sh@10 -- # set +x 00:05:53.508 ************************************ 00:05:53.508 END TEST accel_compress_verify 00:05:53.508 ************************************ 00:05:53.508 02:03:08 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:53.508 02:03:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:53.508 02:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.508 02:03:08 -- common/autotest_common.sh@10 -- # set +x 00:05:53.508 ************************************ 00:05:53.508 START TEST accel_wrong_workload 00:05:53.508 ************************************ 00:05:53.508 02:03:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:53.508 02:03:08 -- common/autotest_common.sh@640 -- # local es=0 00:05:53.508 02:03:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:53.508 02:03:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:53.508 02:03:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.508 02:03:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:53.508 02:03:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.508 02:03:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:53.508 02:03:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:53.508 02:03:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.508 02:03:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.508 02:03:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.508 02:03:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.508 02:03:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.508 02:03:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.508 02:03:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.508 02:03:08 -- accel/accel.sh@42 -- # jq -r . 
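Both expected-failure runs above also show how the harness appears to normalise exit codes before matching them: the raw status (234, then 161) has 128 subtracted, giving the es=106 and es=33 values that the case statement then maps down to es=1. The arithmetic is easy to confirm:

  echo $(( 234 - 128 ))   # 106, as logged for accel_missing_filename
  echo $(( 161 - 128 ))   # 33, as logged for accel_compress_verify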
00:05:53.767 Unsupported workload type: foobar 00:05:53.767 [2024-05-14 02:03:08.102330] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:53.767 accel_perf options: 00:05:53.767 [-h help message] 00:05:53.767 [-q queue depth per core] 00:05:53.767 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:53.767 [-T number of threads per core 00:05:53.767 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:53.767 [-t time in seconds] 00:05:53.767 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:53.767 [ dif_verify, , dif_generate, dif_generate_copy 00:05:53.767 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:53.767 [-l for compress/decompress workloads, name of uncompressed input file 00:05:53.767 [-S for crc32c workload, use this seed value (default 0) 00:05:53.767 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:53.767 [-f for fill workload, use this BYTE value (default 255) 00:05:53.767 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:53.767 [-y verify result if this switch is on] 00:05:53.767 [-a tasks to allocate per core (default: same value as -q)] 00:05:53.767 Can be used to spread operations across a wider range of memory. 00:05:53.767 02:03:08 -- common/autotest_common.sh@643 -- # es=1 00:05:53.767 02:03:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:53.767 02:03:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:53.767 02:03:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:53.767 00:05:53.767 real 0m0.035s 00:05:53.767 user 0m0.020s 00:05:53.767 sys 0m0.015s 00:05:53.767 ************************************ 00:05:53.767 END TEST accel_wrong_workload 00:05:53.767 ************************************ 00:05:53.767 02:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.767 02:03:08 -- common/autotest_common.sh@10 -- # set +x 00:05:53.767 02:03:08 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:53.767 02:03:08 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:53.767 02:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.767 02:03:08 -- common/autotest_common.sh@10 -- # set +x 00:05:53.767 ************************************ 00:05:53.767 START TEST accel_negative_buffers 00:05:53.767 ************************************ 00:05:53.767 02:03:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:53.767 02:03:08 -- common/autotest_common.sh@640 -- # local es=0 00:05:53.767 02:03:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:53.767 02:03:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:53.767 02:03:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.767 02:03:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:53.767 02:03:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.767 02:03:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:53.767 02:03:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:53.767 02:03:08 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:53.767 02:03:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.767 02:03:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.767 02:03:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.767 02:03:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.767 02:03:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.767 02:03:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.767 02:03:08 -- accel/accel.sh@42 -- # jq -r . 00:05:53.767 -x option must be non-negative. 00:05:53.767 [2024-05-14 02:03:08.172207] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:53.767 accel_perf options: 00:05:53.767 [-h help message] 00:05:53.767 [-q queue depth per core] 00:05:53.767 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:53.767 [-T number of threads per core 00:05:53.767 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:53.767 [-t time in seconds] 00:05:53.767 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:53.767 [ dif_verify, , dif_generate, dif_generate_copy 00:05:53.767 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:53.767 [-l for compress/decompress workloads, name of uncompressed input file 00:05:53.767 [-S for crc32c workload, use this seed value (default 0) 00:05:53.767 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:53.767 [-f for fill workload, use this BYTE value (default 255) 00:05:53.767 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:53.767 [-y verify result if this switch is on] 00:05:53.767 [-a tasks to allocate per core (default: same value as -q)] 00:05:53.767 Can be used to spread operations across a wider range of memory. 
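Editor's note: the usage text above is printed because parsing '-x -1' fails, and the es bookkeeping from autotest_common.sh that follows only counts the test as passed when accel_perf exits non-zero. A simplified sketch of that negated-exit-code pattern; this is a hypothetical stand-in, not the actual NOT helper from autotest_common.sh.

  # not(): succeed only if the wrapped command fails.
  not() { "$@"; (( $? != 0 )); }
  not /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1 \
    && echo 'negative source-buffer count rejected as expected'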
00:05:53.767 02:03:08 -- common/autotest_common.sh@643 -- # es=1 00:05:53.767 02:03:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:53.767 02:03:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:53.767 02:03:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:53.767 00:05:53.767 real 0m0.024s 00:05:53.767 user 0m0.019s 00:05:53.767 sys 0m0.006s 00:05:53.767 02:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.767 02:03:08 -- common/autotest_common.sh@10 -- # set +x 00:05:53.767 ************************************ 00:05:53.767 END TEST accel_negative_buffers 00:05:53.767 ************************************ 00:05:53.767 02:03:08 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:53.767 02:03:08 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:53.767 02:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.767 02:03:08 -- common/autotest_common.sh@10 -- # set +x 00:05:53.767 ************************************ 00:05:53.767 START TEST accel_crc32c 00:05:53.767 ************************************ 00:05:53.767 02:03:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:53.767 02:03:08 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.767 02:03:08 -- accel/accel.sh@17 -- # local accel_module 00:05:53.767 02:03:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:53.767 02:03:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:53.767 02:03:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.767 02:03:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.767 02:03:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.767 02:03:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.767 02:03:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.767 02:03:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.767 02:03:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.767 02:03:08 -- accel/accel.sh@42 -- # jq -r . 00:05:53.767 [2024-05-14 02:03:08.236079] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:53.767 [2024-05-14 02:03:08.236170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58433 ] 00:05:54.026 [2024-05-14 02:03:08.373439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.026 [2024-05-14 02:03:08.445361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.398 02:03:09 -- accel/accel.sh@18 -- # out=' 00:05:55.398 SPDK Configuration: 00:05:55.398 Core mask: 0x1 00:05:55.398 00:05:55.398 Accel Perf Configuration: 00:05:55.398 Workload Type: crc32c 00:05:55.398 CRC-32C seed: 32 00:05:55.398 Transfer size: 4096 bytes 00:05:55.398 Vector count 1 00:05:55.398 Module: software 00:05:55.398 Queue depth: 32 00:05:55.398 Allocate depth: 32 00:05:55.398 # threads/core: 1 00:05:55.398 Run time: 1 seconds 00:05:55.398 Verify: Yes 00:05:55.398 00:05:55.398 Running for 1 seconds... 
00:05:55.398 00:05:55.398 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:55.398 ------------------------------------------------------------------------------------ 00:05:55.398 0,0 409120/s 1598 MiB/s 0 0 00:05:55.398 ==================================================================================== 00:05:55.398 Total 409120/s 1598 MiB/s 0 0' 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.398 02:03:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:55.398 02:03:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.398 02:03:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:55.398 02:03:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.398 02:03:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.398 02:03:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.398 02:03:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.398 02:03:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.398 02:03:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.398 02:03:09 -- accel/accel.sh@42 -- # jq -r . 00:05:55.398 [2024-05-14 02:03:09.654095] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:55.398 [2024-05-14 02:03:09.654625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58447 ] 00:05:55.398 [2024-05-14 02:03:09.788017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.398 [2024-05-14 02:03:09.846413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.398 02:03:09 -- accel/accel.sh@21 -- # val= 00:05:55.398 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.398 02:03:09 -- accel/accel.sh@21 -- # val= 00:05:55.398 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.398 02:03:09 -- accel/accel.sh@21 -- # val=0x1 00:05:55.398 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.398 02:03:09 -- accel/accel.sh@21 -- # val= 00:05:55.398 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.398 02:03:09 -- accel/accel.sh@21 -- # val= 00:05:55.398 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.398 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val=crc32c 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val=32 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val= 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val=software 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@23 -- # accel_module=software 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val=32 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val=32 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val=1 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val=Yes 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val= 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.399 02:03:09 -- accel/accel.sh@21 -- # val= 00:05:55.399 02:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.399 02:03:09 -- accel/accel.sh@20 -- # read -r var val 00:05:56.772 02:03:11 -- accel/accel.sh@21 -- # val= 00:05:56.772 02:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.772 02:03:11 -- accel/accel.sh@21 -- # val= 00:05:56.772 02:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.772 02:03:11 -- accel/accel.sh@21 -- # val= 00:05:56.772 02:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.772 02:03:11 -- accel/accel.sh@21 -- # val= 00:05:56.772 02:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.772 02:03:11 -- accel/accel.sh@21 -- # val= 00:05:56.772 02:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.772 02:03:11 -- 
accel/accel.sh@20 -- # read -r var val 00:05:56.772 02:03:11 -- accel/accel.sh@21 -- # val= 00:05:56.772 02:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.772 02:03:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.772 02:03:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:56.772 02:03:11 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:56.772 02:03:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.772 00:05:56.772 real 0m2.800s 00:05:56.772 user 0m2.443s 00:05:56.772 sys 0m0.152s 00:05:56.772 02:03:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.772 02:03:11 -- common/autotest_common.sh@10 -- # set +x 00:05:56.772 ************************************ 00:05:56.772 END TEST accel_crc32c 00:05:56.772 ************************************ 00:05:56.772 02:03:11 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:56.772 02:03:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:56.772 02:03:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.772 02:03:11 -- common/autotest_common.sh@10 -- # set +x 00:05:56.772 ************************************ 00:05:56.772 START TEST accel_crc32c_C2 00:05:56.772 ************************************ 00:05:56.772 02:03:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:56.772 02:03:11 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.772 02:03:11 -- accel/accel.sh@17 -- # local accel_module 00:05:56.772 02:03:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:56.772 02:03:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:56.772 02:03:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.772 02:03:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.772 02:03:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.772 02:03:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.772 02:03:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.772 02:03:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.772 02:03:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.772 02:03:11 -- accel/accel.sh@42 -- # jq -r . 00:05:56.772 [2024-05-14 02:03:11.084589] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:56.772 [2024-05-14 02:03:11.084688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58482 ] 00:05:56.772 [2024-05-14 02:03:11.220572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.772 [2024-05-14 02:03:11.287226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.146 02:03:12 -- accel/accel.sh@18 -- # out=' 00:05:58.146 SPDK Configuration: 00:05:58.146 Core mask: 0x1 00:05:58.146 00:05:58.146 Accel Perf Configuration: 00:05:58.146 Workload Type: crc32c 00:05:58.146 CRC-32C seed: 0 00:05:58.146 Transfer size: 4096 bytes 00:05:58.146 Vector count 2 00:05:58.146 Module: software 00:05:58.146 Queue depth: 32 00:05:58.146 Allocate depth: 32 00:05:58.146 # threads/core: 1 00:05:58.146 Run time: 1 seconds 00:05:58.146 Verify: Yes 00:05:58.146 00:05:58.146 Running for 1 seconds... 
00:05:58.146 00:05:58.147 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:58.147 ------------------------------------------------------------------------------------ 00:05:58.147 0,0 326912/s 2554 MiB/s 0 0 00:05:58.147 ==================================================================================== 00:05:58.147 Total 326912/s 1277 MiB/s 0 0' 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:58.147 02:03:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.147 02:03:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.147 02:03:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.147 02:03:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.147 02:03:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.147 02:03:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.147 02:03:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.147 02:03:12 -- accel/accel.sh@42 -- # jq -r . 00:05:58.147 [2024-05-14 02:03:12.474903] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:58.147 [2024-05-14 02:03:12.474977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58501 ] 00:05:58.147 [2024-05-14 02:03:12.608534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.147 [2024-05-14 02:03:12.664106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val= 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val= 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val=0x1 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val= 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val= 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val=crc32c 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val=0 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val= 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val=software 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@23 -- # accel_module=software 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val=32 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val=32 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val=1 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val=Yes 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val= 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.147 02:03:12 -- accel/accel.sh@21 -- # val= 00:05:58.147 02:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.147 02:03:12 -- accel/accel.sh@20 -- # read -r var val 00:05:59.521 02:03:13 -- accel/accel.sh@21 -- # val= 00:05:59.521 02:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.521 02:03:13 -- accel/accel.sh@21 -- # val= 00:05:59.521 02:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.521 02:03:13 -- accel/accel.sh@21 -- # val= 00:05:59.521 02:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.521 02:03:13 -- accel/accel.sh@21 -- # val= 00:05:59.521 02:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.521 02:03:13 -- accel/accel.sh@21 -- # val= 00:05:59.521 02:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.521 02:03:13 -- 
accel/accel.sh@20 -- # read -r var val 00:05:59.521 02:03:13 -- accel/accel.sh@21 -- # val= 00:05:59.521 02:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.521 02:03:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.521 02:03:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:59.521 02:03:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:59.521 02:03:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.521 00:05:59.521 real 0m2.768s 00:05:59.521 user 0m2.419s 00:05:59.521 sys 0m0.142s 00:05:59.521 02:03:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.521 ************************************ 00:05:59.521 END TEST accel_crc32c_C2 00:05:59.522 ************************************ 00:05:59.522 02:03:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.522 02:03:13 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:59.522 02:03:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:59.522 02:03:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.522 02:03:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.522 ************************************ 00:05:59.522 START TEST accel_copy 00:05:59.522 ************************************ 00:05:59.522 02:03:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:59.522 02:03:13 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.522 02:03:13 -- accel/accel.sh@17 -- # local accel_module 00:05:59.522 02:03:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:59.522 02:03:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:59.522 02:03:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.522 02:03:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.522 02:03:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.522 02:03:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.522 02:03:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.522 02:03:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.522 02:03:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.522 02:03:13 -- accel/accel.sh@42 -- # jq -r . 00:05:59.522 [2024-05-14 02:03:13.898243] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:59.522 [2024-05-14 02:03:13.898314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58536 ] 00:05:59.522 [2024-05-14 02:03:14.031330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.522 [2024-05-14 02:03:14.092067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.897 02:03:15 -- accel/accel.sh@18 -- # out=' 00:06:00.897 SPDK Configuration: 00:06:00.897 Core mask: 0x1 00:06:00.897 00:06:00.897 Accel Perf Configuration: 00:06:00.897 Workload Type: copy 00:06:00.897 Transfer size: 4096 bytes 00:06:00.897 Vector count 1 00:06:00.897 Module: software 00:06:00.897 Queue depth: 32 00:06:00.897 Allocate depth: 32 00:06:00.897 # threads/core: 1 00:06:00.897 Run time: 1 seconds 00:06:00.897 Verify: Yes 00:06:00.897 00:06:00.897 Running for 1 seconds... 
00:06:00.897 00:06:00.897 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.897 ------------------------------------------------------------------------------------ 00:06:00.897 0,0 299232/s 1168 MiB/s 0 0 00:06:00.897 ==================================================================================== 00:06:00.897 Total 299232/s 1168 MiB/s 0 0' 00:06:00.897 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.897 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.897 02:03:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:00.897 02:03:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:00.897 02:03:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.897 02:03:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.897 02:03:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.897 02:03:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.897 02:03:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.897 02:03:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.897 02:03:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.897 02:03:15 -- accel/accel.sh@42 -- # jq -r . 00:06:00.897 [2024-05-14 02:03:15.291559] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:00.897 [2024-05-14 02:03:15.291712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58550 ] 00:06:00.897 [2024-05-14 02:03:15.441520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.156 [2024-05-14 02:03:15.508989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val= 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val= 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val=0x1 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val= 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val= 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val=copy 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- 
accel/accel.sh@21 -- # val= 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val=software 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val=32 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val=32 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val=1 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val=Yes 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val= 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.156 02:03:15 -- accel/accel.sh@21 -- # val= 00:06:01.156 02:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.156 02:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:02.531 02:03:16 -- accel/accel.sh@21 -- # val= 00:06:02.531 02:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.531 02:03:16 -- accel/accel.sh@21 -- # val= 00:06:02.531 02:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.531 02:03:16 -- accel/accel.sh@21 -- # val= 00:06:02.531 02:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.531 02:03:16 -- accel/accel.sh@21 -- # val= 00:06:02.531 02:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.531 02:03:16 -- accel/accel.sh@21 -- # val= 00:06:02.531 02:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.531 02:03:16 -- accel/accel.sh@21 -- # val= 00:06:02.531 02:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.531 02:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.531 02:03:16 -- 
accel/accel.sh@20 -- # read -r var val 00:06:02.531 02:03:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.531 02:03:16 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:02.531 02:03:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.531 00:06:02.531 real 0m2.809s 00:06:02.531 user 0m2.448s 00:06:02.531 sys 0m0.153s 00:06:02.531 02:03:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.531 02:03:16 -- common/autotest_common.sh@10 -- # set +x 00:06:02.531 ************************************ 00:06:02.531 END TEST accel_copy 00:06:02.531 ************************************ 00:06:02.531 02:03:16 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.531 02:03:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:02.531 02:03:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.531 02:03:16 -- common/autotest_common.sh@10 -- # set +x 00:06:02.531 ************************************ 00:06:02.531 START TEST accel_fill 00:06:02.531 ************************************ 00:06:02.531 02:03:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.531 02:03:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.531 02:03:16 -- accel/accel.sh@17 -- # local accel_module 00:06:02.531 02:03:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.531 02:03:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.531 02:03:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.531 02:03:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.531 02:03:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.531 02:03:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.531 02:03:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.531 02:03:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.531 02:03:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.531 02:03:16 -- accel/accel.sh@42 -- # jq -r . 00:06:02.531 [2024-05-14 02:03:16.751406] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:02.531 [2024-05-14 02:03:16.751502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58584 ] 00:06:02.531 [2024-05-14 02:03:16.887979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.531 [2024-05-14 02:03:16.955518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.910 02:03:18 -- accel/accel.sh@18 -- # out=' 00:06:03.910 SPDK Configuration: 00:06:03.910 Core mask: 0x1 00:06:03.910 00:06:03.910 Accel Perf Configuration: 00:06:03.910 Workload Type: fill 00:06:03.910 Fill pattern: 0x80 00:06:03.910 Transfer size: 4096 bytes 00:06:03.910 Vector count 1 00:06:03.910 Module: software 00:06:03.910 Queue depth: 64 00:06:03.910 Allocate depth: 64 00:06:03.910 # threads/core: 1 00:06:03.910 Run time: 1 seconds 00:06:03.910 Verify: Yes 00:06:03.910 00:06:03.911 Running for 1 seconds... 
00:06:03.911 00:06:03.911 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:03.911 ------------------------------------------------------------------------------------ 00:06:03.911 0,0 422528/s 1650 MiB/s 0 0 00:06:03.911 ==================================================================================== 00:06:03.911 Total 422528/s 1650 MiB/s 0 0' 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:03.911 02:03:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.911 02:03:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.911 02:03:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.911 02:03:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.911 02:03:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.911 02:03:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.911 02:03:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.911 02:03:18 -- accel/accel.sh@42 -- # jq -r . 00:06:03.911 [2024-05-14 02:03:18.147385] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:03.911 [2024-05-14 02:03:18.147474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58604 ] 00:06:03.911 [2024-05-14 02:03:18.287310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.911 [2024-05-14 02:03:18.344417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val= 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val= 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val=0x1 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val= 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val= 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val=fill 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val=0x80 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 
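Editor's note: a quick sanity check on the fill numbers reported above — bandwidth is just transfers per second times the 4096-byte transfer size. A one-line bash sketch of that arithmetic:

  # 422528 transfers/s * 4096 bytes per transfer, expressed in MiB/s.
  echo $(( 422528 * 4096 / 1048576 ))   # prints 1650, matching the table above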
00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val= 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val=software 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val=64 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val=64 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val=1 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val=Yes 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val= 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.911 02:03:18 -- accel/accel.sh@21 -- # val= 00:06:03.911 02:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:03.911 02:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.286 02:03:19 -- accel/accel.sh@21 -- # val= 00:06:05.286 02:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.286 02:03:19 -- accel/accel.sh@21 -- # val= 00:06:05.286 02:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.286 02:03:19 -- accel/accel.sh@21 -- # val= 00:06:05.286 02:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.286 02:03:19 -- accel/accel.sh@21 -- # val= 00:06:05.286 02:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.286 02:03:19 -- accel/accel.sh@21 -- # val= 00:06:05.286 02:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # IFS=: 
00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.286 02:03:19 -- accel/accel.sh@21 -- # val= 00:06:05.286 02:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.286 02:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.286 02:03:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:05.286 02:03:19 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:05.286 02:03:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.286 00:06:05.286 real 0m2.784s 00:06:05.286 user 0m2.433s 00:06:05.286 sys 0m0.144s 00:06:05.286 02:03:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.286 02:03:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.286 ************************************ 00:06:05.286 END TEST accel_fill 00:06:05.286 ************************************ 00:06:05.286 02:03:19 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:05.286 02:03:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:05.286 02:03:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.286 02:03:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.286 ************************************ 00:06:05.286 START TEST accel_copy_crc32c 00:06:05.286 ************************************ 00:06:05.286 02:03:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:05.286 02:03:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.287 02:03:19 -- accel/accel.sh@17 -- # local accel_module 00:06:05.287 02:03:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:05.287 02:03:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:05.287 02:03:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.287 02:03:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.287 02:03:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.287 02:03:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.287 02:03:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.287 02:03:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.287 02:03:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.287 02:03:19 -- accel/accel.sh@42 -- # jq -r . 00:06:05.287 [2024-05-14 02:03:19.576992] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:05.287 [2024-05-14 02:03:19.577076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58633 ] 00:06:05.287 [2024-05-14 02:03:19.712782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.287 [2024-05-14 02:03:19.768740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.661 02:03:20 -- accel/accel.sh@18 -- # out=' 00:06:06.661 SPDK Configuration: 00:06:06.661 Core mask: 0x1 00:06:06.661 00:06:06.661 Accel Perf Configuration: 00:06:06.661 Workload Type: copy_crc32c 00:06:06.661 CRC-32C seed: 0 00:06:06.661 Vector size: 4096 bytes 00:06:06.661 Transfer size: 4096 bytes 00:06:06.661 Vector count 1 00:06:06.661 Module: software 00:06:06.661 Queue depth: 32 00:06:06.661 Allocate depth: 32 00:06:06.662 # threads/core: 1 00:06:06.662 Run time: 1 seconds 00:06:06.662 Verify: Yes 00:06:06.662 00:06:06.662 Running for 1 seconds... 
00:06:06.662 00:06:06.662 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.662 ------------------------------------------------------------------------------------ 00:06:06.662 0,0 237376/s 927 MiB/s 0 0 00:06:06.662 ==================================================================================== 00:06:06.662 Total 237376/s 927 MiB/s 0 0' 00:06:06.662 02:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:06.662 02:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.662 02:03:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:06.662 02:03:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.662 02:03:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.662 02:03:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.662 02:03:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.662 02:03:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.662 02:03:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.662 02:03:20 -- accel/accel.sh@42 -- # jq -r . 00:06:06.662 [2024-05-14 02:03:20.958755] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:06.662 [2024-05-14 02:03:20.958861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58658 ] 00:06:06.662 [2024-05-14 02:03:21.095539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.662 [2024-05-14 02:03:21.151328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val= 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val= 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val=0x1 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val= 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val= 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val=0 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 
02:03:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val= 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val=software 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val=32 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val=32 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val=1 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val=Yes 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val= 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.662 02:03:21 -- accel/accel.sh@21 -- # val= 00:06:06.662 02:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.662 02:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:08.082 02:03:22 -- accel/accel.sh@21 -- # val= 00:06:08.082 02:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.082 02:03:22 -- accel/accel.sh@21 -- # val= 00:06:08.082 02:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.082 02:03:22 -- accel/accel.sh@21 -- # val= 00:06:08.082 02:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.082 02:03:22 -- accel/accel.sh@21 -- # val= 00:06:08.082 02:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # IFS=: 
00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.082 02:03:22 -- accel/accel.sh@21 -- # val= 00:06:08.082 02:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.082 02:03:22 -- accel/accel.sh@21 -- # val= 00:06:08.082 02:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.082 02:03:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.082 02:03:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.082 02:03:22 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:08.082 02:03:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.082 00:06:08.082 real 0m2.766s 00:06:08.082 user 0m2.439s 00:06:08.082 sys 0m0.124s 00:06:08.082 02:03:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.082 02:03:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.082 ************************************ 00:06:08.082 END TEST accel_copy_crc32c 00:06:08.082 ************************************ 00:06:08.082 02:03:22 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:08.082 02:03:22 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:08.082 02:03:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.082 02:03:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.082 ************************************ 00:06:08.082 START TEST accel_copy_crc32c_C2 00:06:08.082 ************************************ 00:06:08.082 02:03:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:08.082 02:03:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.082 02:03:22 -- accel/accel.sh@17 -- # local accel_module 00:06:08.082 02:03:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:08.082 02:03:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:08.082 02:03:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.082 02:03:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.082 02:03:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.082 02:03:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.082 02:03:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.082 02:03:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.082 02:03:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.082 02:03:22 -- accel/accel.sh@42 -- # jq -r . 00:06:08.082 [2024-05-14 02:03:22.387630] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:08.082 [2024-05-14 02:03:22.387734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58687 ] 00:06:08.082 [2024-05-14 02:03:22.530003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.083 [2024-05-14 02:03:22.603642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.459 02:03:23 -- accel/accel.sh@18 -- # out=' 00:06:09.459 SPDK Configuration: 00:06:09.459 Core mask: 0x1 00:06:09.459 00:06:09.459 Accel Perf Configuration: 00:06:09.459 Workload Type: copy_crc32c 00:06:09.459 CRC-32C seed: 0 00:06:09.459 Vector size: 4096 bytes 00:06:09.459 Transfer size: 8192 bytes 00:06:09.459 Vector count 2 00:06:09.459 Module: software 00:06:09.459 Queue depth: 32 00:06:09.459 Allocate depth: 32 00:06:09.459 # threads/core: 1 00:06:09.459 Run time: 1 seconds 00:06:09.459 Verify: Yes 00:06:09.459 00:06:09.459 Running for 1 seconds... 00:06:09.459 00:06:09.459 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:09.459 ------------------------------------------------------------------------------------ 00:06:09.459 0,0 166144/s 1298 MiB/s 0 0 00:06:09.459 ==================================================================================== 00:06:09.459 Total 166144/s 649 MiB/s 0 0' 00:06:09.459 02:03:23 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:23 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:09.459 02:03:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.459 02:03:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:09.459 02:03:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.459 02:03:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.459 02:03:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.459 02:03:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.459 02:03:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.459 02:03:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.459 02:03:23 -- accel/accel.sh@42 -- # jq -r . 00:06:09.459 [2024-05-14 02:03:23.796504] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:09.459 [2024-05-14 02:03:23.796599] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58705 ] 00:06:09.459 [2024-05-14 02:03:23.931208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.459 [2024-05-14 02:03:23.987571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val= 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val= 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val=0x1 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val= 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val= 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val=0 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val= 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val=software 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val=32 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val=32 
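The long runs of "val=", "case \"$var\" in", "IFS=:" and "read -r var val" records above and below are accel.sh parsing the accel_perf output it just captured: each line of the configuration summary is split on ':' so the script can record which workload ran and which module serviced it, values that the "[[ -n software ]]" and "[[ -n copy_crc32c ]]" checks at the end of each test then assert on. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from accel.sh (the exact match patterns and whitespace trimming are assumptions), is:

    # Parse accel_perf's summary; capture the workload and the module that ran it.
    while IFS=: read -r var val; do
      case "$var" in
        *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;    # e.g. copy_crc32c
        *"Module"*)        accel_module=${val//[[:space:]]/} ;; # e.g. software
      esac
    done <<< "$out"
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]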
00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val=1 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val=Yes 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val= 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 02:03:24 -- accel/accel.sh@21 -- # val= 00:06:09.459 02:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 02:03:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.833 02:03:25 -- accel/accel.sh@21 -- # val= 00:06:10.833 02:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.833 02:03:25 -- accel/accel.sh@21 -- # val= 00:06:10.833 02:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.833 02:03:25 -- accel/accel.sh@21 -- # val= 00:06:10.833 02:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.833 02:03:25 -- accel/accel.sh@21 -- # val= 00:06:10.833 02:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.833 02:03:25 -- accel/accel.sh@21 -- # val= 00:06:10.833 02:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.833 02:03:25 -- accel/accel.sh@21 -- # val= 00:06:10.833 02:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.833 02:03:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.833 ************************************ 00:06:10.833 END TEST accel_copy_crc32c_C2 00:06:10.833 ************************************ 00:06:10.833 02:03:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:10.833 02:03:25 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:10.833 02:03:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.833 00:06:10.833 real 0m2.790s 00:06:10.833 user 0m2.432s 00:06:10.833 sys 0m0.151s 00:06:10.833 02:03:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.833 02:03:25 -- common/autotest_common.sh@10 -- # set +x 00:06:10.833 02:03:25 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:10.833 02:03:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:06:10.833 02:03:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.833 02:03:25 -- common/autotest_common.sh@10 -- # set +x 00:06:10.833 ************************************ 00:06:10.833 START TEST accel_dualcast 00:06:10.833 ************************************ 00:06:10.833 02:03:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:10.833 02:03:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.833 02:03:25 -- accel/accel.sh@17 -- # local accel_module 00:06:10.833 02:03:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:10.833 02:03:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:10.833 02:03:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.833 02:03:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.833 02:03:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.833 02:03:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.833 02:03:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.833 02:03:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.833 02:03:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.833 02:03:25 -- accel/accel.sh@42 -- # jq -r . 00:06:10.833 [2024-05-14 02:03:25.233517] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:10.833 [2024-05-14 02:03:25.233793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58741 ] 00:06:10.833 [2024-05-14 02:03:25.371403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.091 [2024-05-14 02:03:25.433063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.023 02:03:26 -- accel/accel.sh@18 -- # out=' 00:06:12.023 SPDK Configuration: 00:06:12.023 Core mask: 0x1 00:06:12.023 00:06:12.023 Accel Perf Configuration: 00:06:12.023 Workload Type: dualcast 00:06:12.023 Transfer size: 4096 bytes 00:06:12.023 Vector count 1 00:06:12.023 Module: software 00:06:12.023 Queue depth: 32 00:06:12.023 Allocate depth: 32 00:06:12.023 # threads/core: 1 00:06:12.023 Run time: 1 seconds 00:06:12.023 Verify: Yes 00:06:12.023 00:06:12.023 Running for 1 seconds... 00:06:12.023 00:06:12.023 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.023 ------------------------------------------------------------------------------------ 00:06:12.023 0,0 322656/s 1260 MiB/s 0 0 00:06:12.023 ==================================================================================== 00:06:12.023 Total 322656/s 1260 MiB/s 0 0' 00:06:12.023 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.023 02:03:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:12.023 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.023 02:03:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:12.023 02:03:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.023 02:03:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.023 02:03:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.023 02:03:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.023 02:03:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.023 02:03:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.023 02:03:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.023 02:03:26 -- accel/accel.sh@42 -- # jq -r . 
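For orientation, the binary under test is the accel_perf example; the xtrace above launches it as /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y, with build_accel_config feeding a JSON accel configuration over file descriptor 62. In this run none of the optional module branches are taken (every "[[ 0 -gt 0 ]]" guard is false), so only the defaults reach accel_perf and each results block reports "Module: software". Read against the configuration summaries, -t 1 is the 1-second run time, -w selects the workload, -y enables result verification, and the -C 2 used by the preceding copy_crc32c test sets the vector count to 2 (hence the 8192-byte transfer built from two 4096-byte vectors). Illustrative standalone invocations along the same lines, with the config-over-fd plumbing omitted (assuming the defaults behave the same without it):

    # Roughly equivalent direct runs; without a -c config the software module is used.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2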
00:06:12.281 [2024-05-14 02:03:26.629967] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:12.281 [2024-05-14 02:03:26.630052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58755 ] 00:06:12.281 [2024-05-14 02:03:26.765901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.281 [2024-05-14 02:03:26.825454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val= 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val= 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val=0x1 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val= 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val= 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val=dualcast 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val= 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val=software 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val=32 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val=32 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val=1 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 
02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val=Yes 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val= 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.281 02:03:26 -- accel/accel.sh@21 -- # val= 00:06:12.281 02:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.281 02:03:26 -- accel/accel.sh@20 -- # read -r var val 00:06:13.653 02:03:27 -- accel/accel.sh@21 -- # val= 00:06:13.653 02:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.653 02:03:27 -- accel/accel.sh@21 -- # val= 00:06:13.653 02:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.653 02:03:27 -- accel/accel.sh@21 -- # val= 00:06:13.653 02:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.653 02:03:27 -- accel/accel.sh@21 -- # val= 00:06:13.653 02:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.653 ************************************ 00:06:13.653 END TEST accel_dualcast 00:06:13.653 ************************************ 00:06:13.653 02:03:27 -- accel/accel.sh@21 -- # val= 00:06:13.653 02:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.653 02:03:27 -- accel/accel.sh@21 -- # val= 00:06:13.653 02:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.653 02:03:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.653 02:03:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.653 02:03:27 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:13.653 02:03:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.653 00:06:13.653 real 0m2.788s 00:06:13.653 user 0m2.431s 00:06:13.653 sys 0m0.150s 00:06:13.653 02:03:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.653 02:03:27 -- common/autotest_common.sh@10 -- # set +x 00:06:13.653 02:03:28 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:13.653 02:03:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:13.653 02:03:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.653 02:03:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.653 ************************************ 00:06:13.653 START TEST accel_compare 00:06:13.653 ************************************ 00:06:13.653 02:03:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:13.653 
02:03:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.653 02:03:28 -- accel/accel.sh@17 -- # local accel_module 00:06:13.653 02:03:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:13.653 02:03:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:13.653 02:03:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.653 02:03:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.653 02:03:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.653 02:03:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.653 02:03:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.653 02:03:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.653 02:03:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.653 02:03:28 -- accel/accel.sh@42 -- # jq -r . 00:06:13.653 [2024-05-14 02:03:28.060896] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:13.654 [2024-05-14 02:03:28.060977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58795 ] 00:06:13.654 [2024-05-14 02:03:28.192922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.911 [2024-05-14 02:03:28.260129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.871 02:03:29 -- accel/accel.sh@18 -- # out=' 00:06:14.871 SPDK Configuration: 00:06:14.871 Core mask: 0x1 00:06:14.871 00:06:14.871 Accel Perf Configuration: 00:06:14.871 Workload Type: compare 00:06:14.871 Transfer size: 4096 bytes 00:06:14.871 Vector count 1 00:06:14.871 Module: software 00:06:14.871 Queue depth: 32 00:06:14.871 Allocate depth: 32 00:06:14.871 # threads/core: 1 00:06:14.871 Run time: 1 seconds 00:06:14.871 Verify: Yes 00:06:14.871 00:06:14.871 Running for 1 seconds... 00:06:14.871 00:06:14.871 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.871 ------------------------------------------------------------------------------------ 00:06:14.871 0,0 385312/s 1505 MiB/s 0 0 00:06:14.871 ==================================================================================== 00:06:14.871 Total 385312/s 1505 MiB/s 0 0' 00:06:14.871 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.871 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.871 02:03:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:14.871 02:03:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.871 02:03:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:14.871 02:03:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.871 02:03:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.871 02:03:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.871 02:03:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.871 02:03:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.871 02:03:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.871 02:03:29 -- accel/accel.sh@42 -- # jq -r . 00:06:15.129 [2024-05-14 02:03:29.474739] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:15.129 [2024-05-14 02:03:29.475731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58809 ] 00:06:15.129 [2024-05-14 02:03:29.617315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.129 [2024-05-14 02:03:29.688237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val= 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val= 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val=0x1 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val= 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val= 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val=compare 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val= 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val=software 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val=32 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val=32 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val=1 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val='1 seconds' 
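The accel_compare workload exercised here is the simplest in this batch: -w compare has accel_perf check two 4096-byte buffers for equality per operation, and on the software path it posts the highest single-core rate so far in this section (385312 ops/s, about 1505 MiB/s, versus 322656 ops/s for dualcast above). A matching direct invocation, under the same assumptions as the earlier sketches:

    # Compare two 4096-byte buffers per operation for 1 second, with verification.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y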
00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val=Yes 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val= 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.386 02:03:29 -- accel/accel.sh@21 -- # val= 00:06:15.386 02:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.386 02:03:29 -- accel/accel.sh@20 -- # read -r var val 00:06:16.322 02:03:30 -- accel/accel.sh@21 -- # val= 00:06:16.322 02:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.322 02:03:30 -- accel/accel.sh@21 -- # val= 00:06:16.322 02:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.322 02:03:30 -- accel/accel.sh@21 -- # val= 00:06:16.322 02:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.322 02:03:30 -- accel/accel.sh@21 -- # val= 00:06:16.322 02:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.322 02:03:30 -- accel/accel.sh@21 -- # val= 00:06:16.322 02:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.322 02:03:30 -- accel/accel.sh@21 -- # val= 00:06:16.322 02:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.322 02:03:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.322 02:03:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.322 02:03:30 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:16.322 02:03:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.322 00:06:16.322 real 0m2.826s 00:06:16.322 user 0m2.456s 00:06:16.322 sys 0m0.155s 00:06:16.322 02:03:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.322 02:03:30 -- common/autotest_common.sh@10 -- # set +x 00:06:16.322 ************************************ 00:06:16.322 END TEST accel_compare 00:06:16.322 ************************************ 00:06:16.322 02:03:30 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:16.322 02:03:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:16.322 02:03:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.322 02:03:30 -- common/autotest_common.sh@10 -- # set +x 00:06:16.579 ************************************ 00:06:16.579 START TEST accel_xor 00:06:16.579 ************************************ 00:06:16.579 02:03:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:16.579 02:03:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.579 02:03:30 -- accel/accel.sh@17 -- # local accel_module 00:06:16.579 
02:03:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:16.579 02:03:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.579 02:03:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:16.579 02:03:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.579 02:03:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.579 02:03:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.579 02:03:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.579 02:03:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.579 02:03:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.579 02:03:30 -- accel/accel.sh@42 -- # jq -r . 00:06:16.579 [2024-05-14 02:03:30.936229] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:16.579 [2024-05-14 02:03:30.936325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58838 ] 00:06:16.580 [2024-05-14 02:03:31.076433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.580 [2024-05-14 02:03:31.143949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.953 02:03:32 -- accel/accel.sh@18 -- # out=' 00:06:17.953 SPDK Configuration: 00:06:17.953 Core mask: 0x1 00:06:17.953 00:06:17.953 Accel Perf Configuration: 00:06:17.953 Workload Type: xor 00:06:17.953 Source buffers: 2 00:06:17.953 Transfer size: 4096 bytes 00:06:17.953 Vector count 1 00:06:17.953 Module: software 00:06:17.953 Queue depth: 32 00:06:17.953 Allocate depth: 32 00:06:17.953 # threads/core: 1 00:06:17.953 Run time: 1 seconds 00:06:17.953 Verify: Yes 00:06:17.953 00:06:17.953 Running for 1 seconds... 00:06:17.953 00:06:17.953 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.953 ------------------------------------------------------------------------------------ 00:06:17.953 0,0 227072/s 887 MiB/s 0 0 00:06:17.953 ==================================================================================== 00:06:17.953 Total 227072/s 887 MiB/s 0 0' 00:06:17.953 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:17.953 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:17.953 02:03:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:17.953 02:03:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:17.953 02:03:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.953 02:03:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.953 02:03:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.953 02:03:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.953 02:03:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.953 02:03:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.953 02:03:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.953 02:03:32 -- accel/accel.sh@42 -- # jq -r . 00:06:17.953 [2024-05-14 02:03:32.346349] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:17.954 [2024-05-14 02:03:32.346445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58863 ] 00:06:17.954 [2024-05-14 02:03:32.484089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.212 [2024-05-14 02:03:32.556892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val= 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val= 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val=0x1 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val= 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val= 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val=xor 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val=2 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val= 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val=software 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val=32 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val=32 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val=1 00:06:18.212 02:03:32 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val=Yes 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val= 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.212 02:03:32 -- accel/accel.sh@21 -- # val= 00:06:18.212 02:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.212 02:03:32 -- accel/accel.sh@20 -- # read -r var val 00:06:19.145 02:03:33 -- accel/accel.sh@21 -- # val= 00:06:19.145 02:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.145 02:03:33 -- accel/accel.sh@20 -- # IFS=: 00:06:19.145 02:03:33 -- accel/accel.sh@20 -- # read -r var val 00:06:19.145 02:03:33 -- accel/accel.sh@21 -- # val= 00:06:19.145 02:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.145 02:03:33 -- accel/accel.sh@20 -- # IFS=: 00:06:19.145 02:03:33 -- accel/accel.sh@20 -- # read -r var val 00:06:19.145 02:03:33 -- accel/accel.sh@21 -- # val= 00:06:19.145 02:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.145 02:03:33 -- accel/accel.sh@20 -- # IFS=: 00:06:19.145 02:03:33 -- accel/accel.sh@20 -- # read -r var val 00:06:19.145 02:03:33 -- accel/accel.sh@21 -- # val= 00:06:19.145 02:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.145 02:03:33 -- accel/accel.sh@20 -- # IFS=: 00:06:19.145 02:03:33 -- accel/accel.sh@20 -- # read -r var val 00:06:19.402 02:03:33 -- accel/accel.sh@21 -- # val= 00:06:19.402 02:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.402 02:03:33 -- accel/accel.sh@20 -- # IFS=: 00:06:19.402 02:03:33 -- accel/accel.sh@20 -- # read -r var val 00:06:19.402 02:03:33 -- accel/accel.sh@21 -- # val= 00:06:19.402 02:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.402 02:03:33 -- accel/accel.sh@20 -- # IFS=: 00:06:19.402 02:03:33 -- accel/accel.sh@20 -- # read -r var val 00:06:19.402 02:03:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.402 02:03:33 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:19.402 02:03:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.402 00:06:19.402 real 0m2.822s 00:06:19.402 user 0m2.454s 00:06:19.402 sys 0m0.160s 00:06:19.402 ************************************ 00:06:19.402 END TEST accel_xor 00:06:19.402 ************************************ 00:06:19.402 02:03:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.402 02:03:33 -- common/autotest_common.sh@10 -- # set +x 00:06:19.402 02:03:33 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:19.402 02:03:33 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:19.402 02:03:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.402 02:03:33 -- common/autotest_common.sh@10 -- # set +x 00:06:19.402 ************************************ 00:06:19.402 START TEST accel_xor 00:06:19.402 ************************************ 00:06:19.402 
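The two-source xor run that completes above sustains 227072 ops/s (887 MiB/s) in software; the second accel_xor test, whose banner appears just above, repeats the workload with -x 3, raising the number of source buffers XORed into the destination from 2 to 3 (compare "Source buffers: 2" in the earlier configuration block with "Source buffers: 3" in the next one). Equivalent direct invocations, with the same caveats as before:

    # XOR of the default 2 source buffers, then of 3 source buffers via -x.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3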
02:03:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:19.402 02:03:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.402 02:03:33 -- accel/accel.sh@17 -- # local accel_module 00:06:19.402 02:03:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:19.402 02:03:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:19.402 02:03:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.402 02:03:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.402 02:03:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.402 02:03:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.402 02:03:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.402 02:03:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.402 02:03:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.402 02:03:33 -- accel/accel.sh@42 -- # jq -r . 00:06:19.402 [2024-05-14 02:03:33.805720] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:19.402 [2024-05-14 02:03:33.805837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58892 ] 00:06:19.402 [2024-05-14 02:03:33.936418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.664 [2024-05-14 02:03:34.003214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.617 02:03:35 -- accel/accel.sh@18 -- # out=' 00:06:20.617 SPDK Configuration: 00:06:20.617 Core mask: 0x1 00:06:20.617 00:06:20.617 Accel Perf Configuration: 00:06:20.617 Workload Type: xor 00:06:20.617 Source buffers: 3 00:06:20.617 Transfer size: 4096 bytes 00:06:20.617 Vector count 1 00:06:20.617 Module: software 00:06:20.617 Queue depth: 32 00:06:20.617 Allocate depth: 32 00:06:20.617 # threads/core: 1 00:06:20.617 Run time: 1 seconds 00:06:20.617 Verify: Yes 00:06:20.617 00:06:20.617 Running for 1 seconds... 00:06:20.617 00:06:20.617 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.617 ------------------------------------------------------------------------------------ 00:06:20.617 0,0 224512/s 877 MiB/s 0 0 00:06:20.617 ==================================================================================== 00:06:20.617 Total 224512/s 877 MiB/s 0 0' 00:06:20.617 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.617 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.617 02:03:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:20.617 02:03:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.617 02:03:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:20.617 02:03:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.617 02:03:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.617 02:03:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.617 02:03:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.617 02:03:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.617 02:03:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.617 02:03:35 -- accel/accel.sh@42 -- # jq -r . 00:06:20.617 [2024-05-14 02:03:35.194188] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:20.617 [2024-05-14 02:03:35.194293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58914 ] 00:06:20.875 [2024-05-14 02:03:35.336868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.875 [2024-05-14 02:03:35.394593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val= 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val= 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val=0x1 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val= 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val= 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val=xor 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val=3 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val= 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val=software 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val=32 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val=32 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val=1 00:06:20.875 02:03:35 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val=Yes 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val= 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.875 02:03:35 -- accel/accel.sh@21 -- # val= 00:06:20.875 02:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.875 02:03:35 -- accel/accel.sh@20 -- # read -r var val 00:06:22.248 02:03:36 -- accel/accel.sh@21 -- # val= 00:06:22.248 02:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.248 02:03:36 -- accel/accel.sh@21 -- # val= 00:06:22.248 02:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.248 02:03:36 -- accel/accel.sh@21 -- # val= 00:06:22.248 02:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.248 02:03:36 -- accel/accel.sh@21 -- # val= 00:06:22.248 02:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.248 ************************************ 00:06:22.248 END TEST accel_xor 00:06:22.248 ************************************ 00:06:22.248 02:03:36 -- accel/accel.sh@21 -- # val= 00:06:22.248 02:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.248 02:03:36 -- accel/accel.sh@21 -- # val= 00:06:22.248 02:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # IFS=: 00:06:22.248 02:03:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.248 02:03:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.248 02:03:36 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:22.248 02:03:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.248 00:06:22.248 real 0m2.780s 00:06:22.248 user 0m2.427s 00:06:22.248 sys 0m0.145s 00:06:22.248 02:03:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.248 02:03:36 -- common/autotest_common.sh@10 -- # set +x 00:06:22.248 02:03:36 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:22.248 02:03:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:22.248 02:03:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.248 02:03:36 -- common/autotest_common.sh@10 -- # set +x 00:06:22.248 ************************************ 00:06:22.248 START TEST accel_dif_verify 00:06:22.248 ************************************ 
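accel_dif_verify, which starts here (the three-source xor above finished at 224512 ops/s, essentially the two-source rate), moves from plain data movement to protection-information handling: per the configuration block that follows, each 4096-byte transfer is treated as 512-byte blocks carrying 8 bytes of DIF metadata, and the workload verifies those fields. The harness omits -y for the DIF tests, which is why their results report "Verify: No". A direct invocation along the same lines, with the same caveats as the earlier sketches:

    # Verify DIF over 4096-byte transfers (512-byte blocks + 8-byte metadata).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify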
00:06:22.248 02:03:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:22.248 02:03:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.248 02:03:36 -- accel/accel.sh@17 -- # local accel_module 00:06:22.248 02:03:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:22.248 02:03:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:22.248 02:03:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.248 02:03:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.248 02:03:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.248 02:03:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.248 02:03:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.248 02:03:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.248 02:03:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.248 02:03:36 -- accel/accel.sh@42 -- # jq -r . 00:06:22.248 [2024-05-14 02:03:36.628211] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:22.248 [2024-05-14 02:03:36.628613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58948 ] 00:06:22.248 [2024-05-14 02:03:36.767727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.506 [2024-05-14 02:03:36.853036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.877 02:03:38 -- accel/accel.sh@18 -- # out=' 00:06:23.877 SPDK Configuration: 00:06:23.877 Core mask: 0x1 00:06:23.877 00:06:23.877 Accel Perf Configuration: 00:06:23.877 Workload Type: dif_verify 00:06:23.877 Vector size: 4096 bytes 00:06:23.877 Transfer size: 4096 bytes 00:06:23.877 Block size: 512 bytes 00:06:23.877 Metadata size: 8 bytes 00:06:23.877 Vector count 1 00:06:23.877 Module: software 00:06:23.877 Queue depth: 32 00:06:23.877 Allocate depth: 32 00:06:23.877 # threads/core: 1 00:06:23.877 Run time: 1 seconds 00:06:23.877 Verify: No 00:06:23.877 00:06:23.877 Running for 1 seconds... 00:06:23.877 00:06:23.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.877 ------------------------------------------------------------------------------------ 00:06:23.877 0,0 92160/s 365 MiB/s 0 0 00:06:23.877 ==================================================================================== 00:06:23.877 Total 92160/s 360 MiB/s 0 0' 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.877 02:03:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:23.877 02:03:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:23.877 02:03:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.877 02:03:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.877 02:03:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.877 02:03:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.877 02:03:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.877 02:03:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.877 02:03:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.877 02:03:38 -- accel/accel.sh@42 -- # jq -r . 00:06:23.877 [2024-05-14 02:03:38.056996] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:23.877 [2024-05-14 02:03:38.057119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58962 ] 00:06:23.877 [2024-05-14 02:03:38.225391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.877 [2024-05-14 02:03:38.307881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.877 02:03:38 -- accel/accel.sh@21 -- # val= 00:06:23.877 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.877 02:03:38 -- accel/accel.sh@21 -- # val= 00:06:23.877 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.877 02:03:38 -- accel/accel.sh@21 -- # val=0x1 00:06:23.877 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.877 02:03:38 -- accel/accel.sh@21 -- # val= 00:06:23.877 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.877 02:03:38 -- accel/accel.sh@21 -- # val= 00:06:23.877 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.877 02:03:38 -- accel/accel.sh@21 -- # val=dif_verify 00:06:23.877 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.877 02:03:38 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.877 02:03:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.877 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.877 02:03:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.877 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.877 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.877 02:03:38 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.878 02:03:38 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.878 02:03:38 -- accel/accel.sh@21 -- # val= 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.878 02:03:38 -- accel/accel.sh@21 -- # val=software 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.878 02:03:38 -- accel/accel.sh@21 
-- # val=32 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.878 02:03:38 -- accel/accel.sh@21 -- # val=32 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.878 02:03:38 -- accel/accel.sh@21 -- # val=1 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.878 02:03:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.878 02:03:38 -- accel/accel.sh@21 -- # val=No 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.878 02:03:38 -- accel/accel.sh@21 -- # val= 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:23.878 02:03:38 -- accel/accel.sh@21 -- # val= 00:06:23.878 02:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # IFS=: 00:06:23.878 02:03:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.269 02:03:39 -- accel/accel.sh@21 -- # val= 00:06:25.269 02:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # IFS=: 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # read -r var val 00:06:25.269 02:03:39 -- accel/accel.sh@21 -- # val= 00:06:25.269 02:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # IFS=: 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # read -r var val 00:06:25.269 02:03:39 -- accel/accel.sh@21 -- # val= 00:06:25.269 02:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # IFS=: 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # read -r var val 00:06:25.269 02:03:39 -- accel/accel.sh@21 -- # val= 00:06:25.269 02:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # IFS=: 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # read -r var val 00:06:25.269 02:03:39 -- accel/accel.sh@21 -- # val= 00:06:25.269 02:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # IFS=: 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # read -r var val 00:06:25.269 02:03:39 -- accel/accel.sh@21 -- # val= 00:06:25.269 02:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # IFS=: 00:06:25.269 02:03:39 -- accel/accel.sh@20 -- # read -r var val 00:06:25.269 02:03:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.269 02:03:39 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:25.269 02:03:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.269 ************************************ 00:06:25.269 END TEST accel_dif_verify 00:06:25.270 ************************************ 00:06:25.270 00:06:25.270 real 0m2.884s 00:06:25.270 user 0m2.516s 00:06:25.270 sys 0m0.157s 00:06:25.270 02:03:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.270 
02:03:39 -- common/autotest_common.sh@10 -- # set +x 00:06:25.270 02:03:39 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:25.270 02:03:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:25.270 02:03:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.270 02:03:39 -- common/autotest_common.sh@10 -- # set +x 00:06:25.270 ************************************ 00:06:25.270 START TEST accel_dif_generate 00:06:25.270 ************************************ 00:06:25.270 02:03:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:25.270 02:03:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.270 02:03:39 -- accel/accel.sh@17 -- # local accel_module 00:06:25.270 02:03:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:25.270 02:03:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:25.270 02:03:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.270 02:03:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.270 02:03:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.270 02:03:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.270 02:03:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.270 02:03:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.270 02:03:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.270 02:03:39 -- accel/accel.sh@42 -- # jq -r . 00:06:25.270 [2024-05-14 02:03:39.557515] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:25.270 [2024-05-14 02:03:39.557623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59002 ] 00:06:25.270 [2024-05-14 02:03:39.694236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.270 [2024-05-14 02:03:39.752852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.647 02:03:40 -- accel/accel.sh@18 -- # out=' 00:06:26.647 SPDK Configuration: 00:06:26.647 Core mask: 0x1 00:06:26.647 00:06:26.647 Accel Perf Configuration: 00:06:26.647 Workload Type: dif_generate 00:06:26.647 Vector size: 4096 bytes 00:06:26.647 Transfer size: 4096 bytes 00:06:26.647 Block size: 512 bytes 00:06:26.647 Metadata size: 8 bytes 00:06:26.647 Vector count 1 00:06:26.647 Module: software 00:06:26.647 Queue depth: 32 00:06:26.647 Allocate depth: 32 00:06:26.647 # threads/core: 1 00:06:26.647 Run time: 1 seconds 00:06:26.647 Verify: No 00:06:26.647 00:06:26.647 Running for 1 seconds... 
00:06:26.647 00:06:26.647 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.647 ------------------------------------------------------------------------------------ 00:06:26.647 0,0 112736/s 447 MiB/s 0 0 00:06:26.647 ==================================================================================== 00:06:26.647 Total 112736/s 440 MiB/s 0 0' 00:06:26.647 02:03:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:26.647 02:03:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:26.647 02:03:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.647 02:03:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.647 02:03:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.647 02:03:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.647 02:03:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.647 02:03:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.647 02:03:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.647 02:03:40 -- accel/accel.sh@42 -- # jq -r . 00:06:26.647 [2024-05-14 02:03:40.946271] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:26.647 [2024-05-14 02:03:40.946382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59016 ] 00:06:26.647 [2024-05-14 02:03:41.086790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.647 [2024-05-14 02:03:41.154022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val= 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val= 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val=0x1 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val= 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val= 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val=dif_generate 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 
00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val= 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val=software 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val=32 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.647 02:03:41 -- accel/accel.sh@21 -- # val=32 00:06:26.647 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.647 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.648 02:03:41 -- accel/accel.sh@21 -- # val=1 00:06:26.648 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.648 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.648 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.648 02:03:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.648 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.648 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.648 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.648 02:03:41 -- accel/accel.sh@21 -- # val=No 00:06:26.648 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.648 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.648 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.648 02:03:41 -- accel/accel.sh@21 -- # val= 00:06:26.648 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.648 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.648 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.648 02:03:41 -- accel/accel.sh@21 -- # val= 00:06:26.648 02:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.648 02:03:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.648 02:03:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.020 02:03:42 -- accel/accel.sh@21 -- # val= 00:06:28.020 02:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.020 02:03:42 -- accel/accel.sh@21 -- # val= 00:06:28.020 02:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.020 02:03:42 -- accel/accel.sh@21 -- # val= 00:06:28.020 02:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.020 02:03:42 -- 
accel/accel.sh@20 -- # IFS=: 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.020 02:03:42 -- accel/accel.sh@21 -- # val= 00:06:28.020 02:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.020 02:03:42 -- accel/accel.sh@21 -- # val= 00:06:28.020 02:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.020 02:03:42 -- accel/accel.sh@21 -- # val= 00:06:28.020 02:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.020 02:03:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.020 02:03:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.020 02:03:42 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:28.020 02:03:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.020 00:06:28.020 real 0m2.792s 00:06:28.020 user 0m2.439s 00:06:28.020 sys 0m0.149s 00:06:28.020 ************************************ 00:06:28.020 END TEST accel_dif_generate 00:06:28.020 ************************************ 00:06:28.020 02:03:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.020 02:03:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.020 02:03:42 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:28.020 02:03:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:28.020 02:03:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.020 02:03:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.020 ************************************ 00:06:28.020 START TEST accel_dif_generate_copy 00:06:28.020 ************************************ 00:06:28.020 02:03:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:28.020 02:03:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.020 02:03:42 -- accel/accel.sh@17 -- # local accel_module 00:06:28.020 02:03:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:28.020 02:03:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:28.020 02:03:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.020 02:03:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.020 02:03:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.020 02:03:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.020 02:03:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.020 02:03:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.020 02:03:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.020 02:03:42 -- accel/accel.sh@42 -- # jq -r . 00:06:28.020 [2024-05-14 02:03:42.391862] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
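Note: the Bandwidth column in these summaries is just transfers-per-second multiplied by the transfer size. For the accel_dif_generate totals above (112736 transfers/s at 4096 bytes each), a quick shell check:

  echo 'scale=1; 112736 * 4096 / 1024 / 1024' | bc    # ~440.3 MiB/s, matching the Total row

The same arithmetic applies to every summary table in this block.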
00:06:28.020 [2024-05-14 02:03:42.391980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59051 ] 00:06:28.020 [2024-05-14 02:03:42.535057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.020 [2024-05-14 02:03:42.603030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.395 02:03:43 -- accel/accel.sh@18 -- # out=' 00:06:29.395 SPDK Configuration: 00:06:29.395 Core mask: 0x1 00:06:29.395 00:06:29.395 Accel Perf Configuration: 00:06:29.395 Workload Type: dif_generate_copy 00:06:29.395 Vector size: 4096 bytes 00:06:29.395 Transfer size: 4096 bytes 00:06:29.395 Vector count 1 00:06:29.395 Module: software 00:06:29.395 Queue depth: 32 00:06:29.395 Allocate depth: 32 00:06:29.395 # threads/core: 1 00:06:29.395 Run time: 1 seconds 00:06:29.395 Verify: No 00:06:29.395 00:06:29.395 Running for 1 seconds... 00:06:29.395 00:06:29.395 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.395 ------------------------------------------------------------------------------------ 00:06:29.395 0,0 85152/s 337 MiB/s 0 0 00:06:29.395 ==================================================================================== 00:06:29.395 Total 85152/s 332 MiB/s 0 0' 00:06:29.395 02:03:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.395 02:03:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.395 02:03:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:29.395 02:03:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:29.395 02:03:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.395 02:03:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.395 02:03:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.395 02:03:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.395 02:03:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.395 02:03:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.395 02:03:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.395 02:03:43 -- accel/accel.sh@42 -- # jq -r . 00:06:29.395 [2024-05-14 02:03:43.791407] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:29.395 [2024-05-14 02:03:43.791974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59070 ] 00:06:29.395 [2024-05-14 02:03:43.928496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.653 [2024-05-14 02:03:43.985333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.653 02:03:44 -- accel/accel.sh@21 -- # val= 00:06:29.653 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.653 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.653 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.653 02:03:44 -- accel/accel.sh@21 -- # val= 00:06:29.653 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.653 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val=0x1 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val= 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val= 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val= 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val=software 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val=32 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val=32 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 
-- # val=1 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val=No 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val= 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.654 02:03:44 -- accel/accel.sh@21 -- # val= 00:06:29.654 02:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.654 02:03:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.606 02:03:45 -- accel/accel.sh@21 -- # val= 00:06:30.606 02:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.606 02:03:45 -- accel/accel.sh@21 -- # val= 00:06:30.606 02:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.606 02:03:45 -- accel/accel.sh@21 -- # val= 00:06:30.606 02:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.606 02:03:45 -- accel/accel.sh@21 -- # val= 00:06:30.606 02:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.606 02:03:45 -- accel/accel.sh@21 -- # val= 00:06:30.606 02:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.606 02:03:45 -- accel/accel.sh@21 -- # val= 00:06:30.606 02:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.606 ************************************ 00:06:30.606 END TEST accel_dif_generate_copy 00:06:30.606 ************************************ 00:06:30.606 02:03:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.606 02:03:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.606 02:03:45 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:30.606 02:03:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.606 00:06:30.606 real 0m2.786s 00:06:30.606 user 0m2.455s 00:06:30.606 sys 0m0.127s 00:06:30.606 02:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.606 02:03:45 -- common/autotest_common.sh@10 -- # set +x 00:06:30.606 02:03:45 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:30.606 02:03:45 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.606 02:03:45 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:30.606 02:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.606 02:03:45 -- 
common/autotest_common.sh@10 -- # set +x 00:06:30.865 ************************************ 00:06:30.865 START TEST accel_comp 00:06:30.865 ************************************ 00:06:30.865 02:03:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.865 02:03:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.865 02:03:45 -- accel/accel.sh@17 -- # local accel_module 00:06:30.865 02:03:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.865 02:03:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.865 02:03:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.865 02:03:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.865 02:03:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.865 02:03:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.865 02:03:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.865 02:03:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.865 02:03:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.865 02:03:45 -- accel/accel.sh@42 -- # jq -r . 00:06:30.865 [2024-05-14 02:03:45.225888] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:30.865 [2024-05-14 02:03:45.226898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59099 ] 00:06:30.865 [2024-05-14 02:03:45.374177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.865 [2024-05-14 02:03:45.442891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.237 02:03:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:32.237 00:06:32.237 SPDK Configuration: 00:06:32.237 Core mask: 0x1 00:06:32.237 00:06:32.237 Accel Perf Configuration: 00:06:32.237 Workload Type: compress 00:06:32.237 Transfer size: 4096 bytes 00:06:32.237 Vector count 1 00:06:32.237 Module: software 00:06:32.237 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.237 Queue depth: 32 00:06:32.237 Allocate depth: 32 00:06:32.237 # threads/core: 1 00:06:32.237 Run time: 1 seconds 00:06:32.237 Verify: No 00:06:32.237 00:06:32.237 Running for 1 seconds... 
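Note: unlike the dif_* runs, the compress case points accel_perf at an on-disk input with -l (the 'File Name' line in the banner above); test/accel/bib is the payload carried in the SPDK tree. The traced command reduced to its essentials, as a sketch (again omitting the wrapper-supplied -c /dev/fd/62):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib

Verify stays 'No' here simply because the wrapper does not pass -y for the compress case; the decompress runs further down do pass it.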
00:06:32.237 00:06:32.237 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.237 ------------------------------------------------------------------------------------ 00:06:32.237 0,0 42976/s 179 MiB/s 0 0 00:06:32.237 ==================================================================================== 00:06:32.237 Total 42976/s 167 MiB/s 0 0' 00:06:32.237 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.237 02:03:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.237 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.237 02:03:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.237 02:03:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.237 02:03:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.237 02:03:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.237 02:03:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.237 02:03:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.237 02:03:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.237 02:03:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.237 02:03:46 -- accel/accel.sh@42 -- # jq -r . 00:06:32.237 [2024-05-14 02:03:46.637910] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:32.237 [2024-05-14 02:03:46.637998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59119 ] 00:06:32.237 [2024-05-14 02:03:46.771716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.495 [2024-05-14 02:03:46.856239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.495 02:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.495 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.495 02:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.495 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.495 02:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.495 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.495 02:03:46 -- accel/accel.sh@21 -- # val=0x1 00:06:32.495 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.495 02:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.495 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.495 02:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.495 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.495 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val=compress 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 
00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val=software 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val=32 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val=32 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val=1 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val=No 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.496 02:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.496 02:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.496 02:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.870 02:03:48 -- accel/accel.sh@21 -- # val= 00:06:33.870 02:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # IFS=: 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # read -r var val 00:06:33.870 02:03:48 -- accel/accel.sh@21 -- # val= 00:06:33.870 02:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # IFS=: 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # read -r var val 00:06:33.870 02:03:48 -- accel/accel.sh@21 -- # val= 00:06:33.870 02:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # IFS=: 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # read -r var val 00:06:33.870 02:03:48 -- accel/accel.sh@21 -- # val= 
00:06:33.870 02:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # IFS=: 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # read -r var val 00:06:33.870 02:03:48 -- accel/accel.sh@21 -- # val= 00:06:33.870 02:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # IFS=: 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # read -r var val 00:06:33.870 02:03:48 -- accel/accel.sh@21 -- # val= 00:06:33.870 02:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # IFS=: 00:06:33.870 02:03:48 -- accel/accel.sh@20 -- # read -r var val 00:06:33.870 02:03:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.870 02:03:48 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:33.870 02:03:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.870 00:06:33.870 real 0m2.842s 00:06:33.870 user 0m2.465s 00:06:33.870 sys 0m0.168s 00:06:33.870 ************************************ 00:06:33.870 END TEST accel_comp 00:06:33.870 ************************************ 00:06:33.870 02:03:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.870 02:03:48 -- common/autotest_common.sh@10 -- # set +x 00:06:33.870 02:03:48 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:33.870 02:03:48 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:33.870 02:03:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.870 02:03:48 -- common/autotest_common.sh@10 -- # set +x 00:06:33.870 ************************************ 00:06:33.870 START TEST accel_decomp 00:06:33.870 ************************************ 00:06:33.870 02:03:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:33.870 02:03:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.870 02:03:48 -- accel/accel.sh@17 -- # local accel_module 00:06:33.870 02:03:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:33.870 02:03:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.870 02:03:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:33.870 02:03:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.870 02:03:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.870 02:03:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.870 02:03:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.870 02:03:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.870 02:03:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.870 02:03:48 -- accel/accel.sh@42 -- # jq -r . 00:06:33.870 [2024-05-14 02:03:48.110838] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:33.870 [2024-05-14 02:03:48.110923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59153 ] 00:06:33.870 [2024-05-14 02:03:48.242919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.870 [2024-05-14 02:03:48.320300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.244 02:03:49 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:35.244 00:06:35.244 SPDK Configuration: 00:06:35.244 Core mask: 0x1 00:06:35.244 00:06:35.244 Accel Perf Configuration: 00:06:35.244 Workload Type: decompress 00:06:35.244 Transfer size: 4096 bytes 00:06:35.244 Vector count 1 00:06:35.244 Module: software 00:06:35.244 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.244 Queue depth: 32 00:06:35.244 Allocate depth: 32 00:06:35.244 # threads/core: 1 00:06:35.244 Run time: 1 seconds 00:06:35.244 Verify: Yes 00:06:35.244 00:06:35.244 Running for 1 seconds... 00:06:35.244 00:06:35.244 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:35.244 ------------------------------------------------------------------------------------ 00:06:35.244 0,0 59936/s 110 MiB/s 0 0 00:06:35.244 ==================================================================================== 00:06:35.244 Total 59936/s 234 MiB/s 0 0' 00:06:35.244 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.244 02:03:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.244 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.244 02:03:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.244 02:03:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.244 02:03:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.244 02:03:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.244 02:03:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.244 02:03:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.244 02:03:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.244 02:03:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.244 02:03:49 -- accel/accel.sh@42 -- # jq -r . 00:06:35.245 [2024-05-14 02:03:49.534933] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
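Note: the decompress runs add -y, which is why the banner above flips to 'Verify: Yes' and each buffer is checked rather than just timed. The invocation as traced at accel.sh@12 above (the -c /dev/fd/62 config descriptor is supplied by the accel_test wrapper and will not exist in a bare shell):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y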
00:06:35.245 [2024-05-14 02:03:49.535056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:06:35.245 [2024-05-14 02:03:49.678385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.245 [2024-05-14 02:03:49.747558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val=0x1 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val=decompress 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val=software 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val=32 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- 
accel/accel.sh@21 -- # val=32 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val=1 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val=Yes 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.245 02:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.245 02:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.245 02:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.654 02:03:50 -- accel/accel.sh@21 -- # val= 00:06:36.654 02:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # IFS=: 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # read -r var val 00:06:36.654 02:03:50 -- accel/accel.sh@21 -- # val= 00:06:36.654 02:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # IFS=: 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # read -r var val 00:06:36.654 02:03:50 -- accel/accel.sh@21 -- # val= 00:06:36.654 02:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # IFS=: 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # read -r var val 00:06:36.654 02:03:50 -- accel/accel.sh@21 -- # val= 00:06:36.654 02:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # IFS=: 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # read -r var val 00:06:36.654 02:03:50 -- accel/accel.sh@21 -- # val= 00:06:36.654 02:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # IFS=: 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # read -r var val 00:06:36.654 02:03:50 -- accel/accel.sh@21 -- # val= 00:06:36.654 02:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # IFS=: 00:06:36.654 02:03:50 -- accel/accel.sh@20 -- # read -r var val 00:06:36.654 02:03:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.654 02:03:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:36.654 02:03:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.654 00:06:36.654 real 0m2.839s 00:06:36.654 user 0m2.473s 00:06:36.654 sys 0m0.160s 00:06:36.654 02:03:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.654 ************************************ 00:06:36.654 END TEST accel_decomp 00:06:36.654 ************************************ 00:06:36.654 02:03:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.654 02:03:50 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:06:36.654 02:03:50 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:36.654 02:03:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.654 02:03:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.654 ************************************ 00:06:36.654 START TEST accel_decmop_full 00:06:36.654 ************************************ 00:06:36.654 02:03:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:36.654 02:03:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.654 02:03:50 -- accel/accel.sh@17 -- # local accel_module 00:06:36.654 02:03:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:36.654 02:03:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:36.654 02:03:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.654 02:03:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.654 02:03:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.654 02:03:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.654 02:03:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.654 02:03:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.654 02:03:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.654 02:03:50 -- accel/accel.sh@42 -- # jq -r . 00:06:36.654 [2024-05-14 02:03:50.994876] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:36.654 [2024-05-14 02:03:50.995394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59206 ] 00:06:36.654 [2024-05-14 02:03:51.132015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.654 [2024-05-14 02:03:51.200352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.028 02:03:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:38.028 00:06:38.028 SPDK Configuration: 00:06:38.028 Core mask: 0x1 00:06:38.028 00:06:38.028 Accel Perf Configuration: 00:06:38.028 Workload Type: decompress 00:06:38.028 Transfer size: 111250 bytes 00:06:38.028 Vector count 1 00:06:38.028 Module: software 00:06:38.028 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.028 Queue depth: 32 00:06:38.028 Allocate depth: 32 00:06:38.028 # threads/core: 1 00:06:38.028 Run time: 1 seconds 00:06:38.028 Verify: Yes 00:06:38.028 00:06:38.028 Running for 1 seconds... 
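Note: accel_decmop_full repeats the decompress workload with -o 0 appended; in the banner above this run reports a 111250-byte transfer size in place of the 4096 bytes used by the earlier runs, so the per-second transfer count in the summaries that follow is in the thousands rather than the tens of thousands seen before, while the MiB/s figure stays in the same range. Sketch of the invocation, flags verbatim from the run_test line above and -c omitted as before:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0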
00:06:38.028 00:06:38.028 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.028 ------------------------------------------------------------------------------------ 00:06:38.028 0,0 4128/s 170 MiB/s 0 0 00:06:38.028 ==================================================================================== 00:06:38.028 Total 4128/s 437 MiB/s 0 0' 00:06:38.028 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.028 02:03:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:38.028 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.028 02:03:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:38.028 02:03:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.028 02:03:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.028 02:03:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.028 02:03:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.028 02:03:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.028 02:03:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.028 02:03:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.028 02:03:52 -- accel/accel.sh@42 -- # jq -r . 00:06:38.028 [2024-05-14 02:03:52.414609] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:38.028 [2024-05-14 02:03:52.414741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59227 ] 00:06:38.028 [2024-05-14 02:03:52.563096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.287 [2024-05-14 02:03:52.621701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val= 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val= 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val= 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val=0x1 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val= 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val= 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val=decompress 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:38.287 02:03:52 -- accel/accel.sh@20 
-- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val= 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val=software 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val=32 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val=32 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val=1 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val=Yes 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val= 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.287 02:03:52 -- accel/accel.sh@21 -- # val= 00:06:38.287 02:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.287 02:03:52 -- accel/accel.sh@20 -- # read -r var val 00:06:39.222 02:03:53 -- accel/accel.sh@21 -- # val= 00:06:39.222 02:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.222 02:03:53 -- accel/accel.sh@21 -- # val= 00:06:39.222 02:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.222 02:03:53 -- accel/accel.sh@21 -- # val= 00:06:39.222 02:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.222 02:03:53 -- accel/accel.sh@21 -- # 
val= 00:06:39.222 02:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.222 02:03:53 -- accel/accel.sh@21 -- # val= 00:06:39.222 02:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.222 02:03:53 -- accel/accel.sh@21 -- # val= 00:06:39.222 02:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.222 02:03:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.222 02:03:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.222 02:03:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:39.222 02:03:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.222 00:06:39.222 real 0m2.835s 00:06:39.222 user 0m2.463s 00:06:39.222 sys 0m0.165s 00:06:39.222 02:03:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.222 02:03:53 -- common/autotest_common.sh@10 -- # set +x 00:06:39.222 ************************************ 00:06:39.222 END TEST accel_decmop_full 00:06:39.222 ************************************ 00:06:39.480 02:03:53 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:39.480 02:03:53 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:39.480 02:03:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.480 02:03:53 -- common/autotest_common.sh@10 -- # set +x 00:06:39.480 ************************************ 00:06:39.480 START TEST accel_decomp_mcore 00:06:39.480 ************************************ 00:06:39.480 02:03:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:39.480 02:03:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.480 02:03:53 -- accel/accel.sh@17 -- # local accel_module 00:06:39.480 02:03:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:39.480 02:03:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:39.480 02:03:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.480 02:03:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.480 02:03:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.480 02:03:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.480 02:03:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.480 02:03:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.480 02:03:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.480 02:03:53 -- accel/accel.sh@42 -- # jq -r . 00:06:39.480 [2024-05-14 02:03:53.876313] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:39.480 [2024-05-14 02:03:53.876417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59256 ] 00:06:39.480 [2024-05-14 02:03:54.013692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.742 [2024-05-14 02:03:54.084464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.742 [2024-05-14 02:03:54.084604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.742 [2024-05-14 02:03:54.084744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.742 [2024-05-14 02:03:54.084745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.114 02:03:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:41.114 00:06:41.114 SPDK Configuration: 00:06:41.114 Core mask: 0xf 00:06:41.114 00:06:41.114 Accel Perf Configuration: 00:06:41.114 Workload Type: decompress 00:06:41.114 Transfer size: 4096 bytes 00:06:41.114 Vector count 1 00:06:41.114 Module: software 00:06:41.114 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:41.114 Queue depth: 32 00:06:41.114 Allocate depth: 32 00:06:41.114 # threads/core: 1 00:06:41.114 Run time: 1 seconds 00:06:41.114 Verify: Yes 00:06:41.114 00:06:41.114 Running for 1 seconds... 00:06:41.114 00:06:41.114 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.114 ------------------------------------------------------------------------------------ 00:06:41.114 0,0 54368/s 100 MiB/s 0 0 00:06:41.114 3,0 53920/s 99 MiB/s 0 0 00:06:41.115 2,0 51264/s 94 MiB/s 0 0 00:06:41.115 1,0 50592/s 93 MiB/s 0 0 00:06:41.115 ==================================================================================== 00:06:41.115 Total 210144/s 820 MiB/s 0 0' 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:41.115 02:03:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.115 02:03:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.115 02:03:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.115 02:03:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.115 02:03:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.115 02:03:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.115 02:03:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.115 02:03:55 -- accel/accel.sh@42 -- # jq -r . 00:06:41.115 [2024-05-14 02:03:55.290516] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
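In the throughput table above, the Total row's transfer rate is the sum of the per-core Transfers column. A quick sanity check with the four per-core values copied from this run (illustrative only, not output produced by the harness):

printf '%s\n' 54368 53920 51264 50592 | awk '{ sum += $1 } END { print sum "/s" }'    # 210144/s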
00:06:41.115 [2024-05-14 02:03:55.290612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59278 ] 00:06:41.115 [2024-05-14 02:03:55.424250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.115 [2024-05-14 02:03:55.484433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.115 [2024-05-14 02:03:55.484511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.115 [2024-05-14 02:03:55.484651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.115 [2024-05-14 02:03:55.484654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val= 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val= 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val= 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val=0xf 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val= 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val= 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val=decompress 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val= 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val=software 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 
00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val=32 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val=32 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val=1 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val=Yes 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val= 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.115 02:03:55 -- accel/accel.sh@21 -- # val= 00:06:41.115 02:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.115 02:03:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.488 02:03:56 -- accel/accel.sh@21 -- # val= 00:06:42.488 02:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.488 02:03:56 -- accel/accel.sh@21 -- # val= 00:06:42.488 02:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.488 02:03:56 -- accel/accel.sh@21 -- # val= 00:06:42.488 02:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.488 02:03:56 -- accel/accel.sh@21 -- # val= 00:06:42.488 02:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.488 02:03:56 -- accel/accel.sh@21 -- # val= 00:06:42.488 02:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.488 02:03:56 -- accel/accel.sh@21 -- # val= 00:06:42.488 02:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.488 02:03:56 -- accel/accel.sh@21 -- # val= 00:06:42.488 02:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.488 02:03:56 -- accel/accel.sh@21 -- # val= 00:06:42.488 02:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.488 02:03:56 -- 
accel/accel.sh@20 -- # read -r var val 00:06:42.488 02:03:56 -- accel/accel.sh@21 -- # val= 00:06:42.488 02:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.488 02:03:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.488 02:03:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.488 02:03:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:42.488 02:03:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.488 00:06:42.488 real 0m2.821s 00:06:42.488 user 0m4.469s 00:06:42.488 sys 0m0.085s 00:06:42.488 02:03:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.488 ************************************ 00:06:42.488 END TEST accel_decomp_mcore 00:06:42.488 ************************************ 00:06:42.488 02:03:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.488 02:03:56 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.488 02:03:56 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:42.488 02:03:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.488 02:03:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.488 ************************************ 00:06:42.488 START TEST accel_decomp_full_mcore 00:06:42.488 ************************************ 00:06:42.488 02:03:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.488 02:03:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.488 02:03:56 -- accel/accel.sh@17 -- # local accel_module 00:06:42.488 02:03:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.488 02:03:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.488 02:03:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.488 02:03:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.488 02:03:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.488 02:03:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.488 02:03:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.488 02:03:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.488 02:03:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.488 02:03:56 -- accel/accel.sh@42 -- # jq -r . 00:06:42.488 [2024-05-14 02:03:56.731905] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:42.488 [2024-05-14 02:03:56.732016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59316 ] 00:06:42.488 [2024-05-14 02:03:56.870682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.488 [2024-05-14 02:03:56.969175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.488 [2024-05-14 02:03:56.969267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.488 [2024-05-14 02:03:56.969350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.488 [2024-05-14 02:03:56.969360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.860 02:03:58 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:43.860 00:06:43.860 SPDK Configuration: 00:06:43.860 Core mask: 0xf 00:06:43.860 00:06:43.860 Accel Perf Configuration: 00:06:43.860 Workload Type: decompress 00:06:43.860 Transfer size: 111250 bytes 00:06:43.861 Vector count 1 00:06:43.861 Module: software 00:06:43.861 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:43.861 Queue depth: 32 00:06:43.861 Allocate depth: 32 00:06:43.861 # threads/core: 1 00:06:43.861 Run time: 1 seconds 00:06:43.861 Verify: Yes 00:06:43.861 00:06:43.861 Running for 1 seconds... 00:06:43.861 00:06:43.861 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.861 ------------------------------------------------------------------------------------ 00:06:43.861 0,0 3808/s 157 MiB/s 0 0 00:06:43.861 3,0 3616/s 149 MiB/s 0 0 00:06:43.861 2,0 3712/s 153 MiB/s 0 0 00:06:43.861 1,0 3168/s 130 MiB/s 0 0 00:06:43.861 ==================================================================================== 00:06:43.861 Total 14304/s 1517 MiB/s 0 0' 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.861 02:03:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.861 02:03:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.861 02:03:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.861 02:03:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.861 02:03:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.861 02:03:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.861 02:03:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.861 02:03:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.861 02:03:58 -- accel/accel.sh@42 -- # jq -r . 00:06:43.861 [2024-05-14 02:03:58.202063] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
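The only flag added for this full-buffer variant is '-o 0', and the configuration above correspondingly reports a 111250-byte transfer size instead of the 4096 bytes used in the earlier run. The Total row is consistent with that size: transfers per second times bytes per transfer, divided by 2^20, gives the reported aggregate bandwidth. A one-line cross-check with the numbers from this run, for illustration only:

echo '14304 * 111250 / 1048576' | bc    # 1517, matching the 1517 MiB/s total above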
00:06:43.861 [2024-05-14 02:03:58.202154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59333 ] 00:06:43.861 [2024-05-14 02:03:58.332225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.861 [2024-05-14 02:03:58.401151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.861 [2024-05-14 02:03:58.401241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.861 [2024-05-14 02:03:58.401306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.861 [2024-05-14 02:03:58.401298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val= 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val= 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val= 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val=0xf 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val= 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val= 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val=decompress 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val= 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val=software 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 
00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val=32 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val=32 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val=1 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.861 02:03:58 -- accel/accel.sh@21 -- # val=Yes 00:06:43.861 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.861 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:44.119 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:44.119 02:03:58 -- accel/accel.sh@21 -- # val= 00:06:44.119 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.119 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:44.119 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:44.119 02:03:58 -- accel/accel.sh@21 -- # val= 00:06:44.119 02:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.119 02:03:58 -- accel/accel.sh@20 -- # IFS=: 00:06:44.119 02:03:58 -- accel/accel.sh@20 -- # read -r var val 00:06:45.051 02:03:59 -- accel/accel.sh@21 -- # val= 00:06:45.051 02:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.051 02:03:59 -- accel/accel.sh@21 -- # val= 00:06:45.051 02:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.051 02:03:59 -- accel/accel.sh@21 -- # val= 00:06:45.051 02:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.051 02:03:59 -- accel/accel.sh@21 -- # val= 00:06:45.051 02:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.051 02:03:59 -- accel/accel.sh@21 -- # val= 00:06:45.051 02:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.051 02:03:59 -- accel/accel.sh@21 -- # val= 00:06:45.051 02:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.051 02:03:59 -- accel/accel.sh@21 -- # val= 00:06:45.051 02:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.051 02:03:59 -- accel/accel.sh@21 -- # val= 00:06:45.051 02:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.051 02:03:59 -- 
accel/accel.sh@20 -- # read -r var val 00:06:45.051 02:03:59 -- accel/accel.sh@21 -- # val= 00:06:45.051 02:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.051 02:03:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.051 02:03:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.051 ************************************ 00:06:45.051 END TEST accel_decomp_full_mcore 00:06:45.051 ************************************ 00:06:45.051 02:03:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:45.051 02:03:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.051 00:06:45.051 real 0m2.902s 00:06:45.051 user 0m9.075s 00:06:45.051 sys 0m0.170s 00:06:45.051 02:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.051 02:03:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.309 02:03:59 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:45.309 02:03:59 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:45.309 02:03:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.309 02:03:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.309 ************************************ 00:06:45.309 START TEST accel_decomp_mthread 00:06:45.309 ************************************ 00:06:45.309 02:03:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:45.309 02:03:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.309 02:03:59 -- accel/accel.sh@17 -- # local accel_module 00:06:45.309 02:03:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:45.309 02:03:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:45.309 02:03:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.309 02:03:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.309 02:03:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.309 02:03:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.309 02:03:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.309 02:03:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.309 02:03:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.309 02:03:59 -- accel/accel.sh@42 -- # jq -r . 00:06:45.309 [2024-05-14 02:03:59.669518] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:45.309 [2024-05-14 02:03:59.669612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59376 ] 00:06:45.309 [2024-05-14 02:03:59.803335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.309 [2024-05-14 02:03:59.880328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.679 02:04:01 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:46.679 00:06:46.679 SPDK Configuration: 00:06:46.679 Core mask: 0x1 00:06:46.679 00:06:46.679 Accel Perf Configuration: 00:06:46.679 Workload Type: decompress 00:06:46.679 Transfer size: 4096 bytes 00:06:46.679 Vector count 1 00:06:46.679 Module: software 00:06:46.679 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:46.679 Queue depth: 32 00:06:46.679 Allocate depth: 32 00:06:46.679 # threads/core: 2 00:06:46.679 Run time: 1 seconds 00:06:46.679 Verify: Yes 00:06:46.679 00:06:46.679 Running for 1 seconds... 00:06:46.679 00:06:46.679 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.679 ------------------------------------------------------------------------------------ 00:06:46.679 0,1 30688/s 56 MiB/s 0 0 00:06:46.679 0,0 30592/s 56 MiB/s 0 0 00:06:46.679 ==================================================================================== 00:06:46.679 Total 61280/s 239 MiB/s 0 0' 00:06:46.679 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.679 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.679 02:04:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:46.679 02:04:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:46.679 02:04:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.679 02:04:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.679 02:04:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.679 02:04:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.679 02:04:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.679 02:04:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.679 02:04:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.679 02:04:01 -- accel/accel.sh@42 -- # jq -r . 00:06:46.679 [2024-05-14 02:04:01.087156] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
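Here '-T 2' puts two worker threads on the single core selected by mask 0x1 (the configuration above reports '# threads/core: 2'), so the table lists core,thread pairs 0,0 and 0,1. Their rates again add up to the Total row; values copied from this run, check shown purely as a reading aid:

printf '%s\n' 30688 30592 | awk '{ sum += $1 } END { print sum "/s" }'    # 61280/s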
00:06:46.679 [2024-05-14 02:04:01.087932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59390 ] 00:06:46.679 [2024-05-14 02:04:01.228360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.973 [2024-05-14 02:04:01.287899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.973 02:04:01 -- accel/accel.sh@21 -- # val= 00:06:46.973 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.973 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.973 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.973 02:04:01 -- accel/accel.sh@21 -- # val= 00:06:46.973 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.973 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.973 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.973 02:04:01 -- accel/accel.sh@21 -- # val= 00:06:46.973 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.973 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.973 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.973 02:04:01 -- accel/accel.sh@21 -- # val=0x1 00:06:46.973 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.973 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.973 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val= 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val= 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val=decompress 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val= 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val=software 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val=32 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- 
accel/accel.sh@21 -- # val=32 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val=2 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val=Yes 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val= 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 02:04:01 -- accel/accel.sh@21 -- # val= 00:06:46.974 02:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 02:04:01 -- accel/accel.sh@20 -- # read -r var val 00:06:47.906 02:04:02 -- accel/accel.sh@21 -- # val= 00:06:47.906 02:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.906 02:04:02 -- accel/accel.sh@21 -- # val= 00:06:47.906 02:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.906 02:04:02 -- accel/accel.sh@21 -- # val= 00:06:47.906 02:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.906 02:04:02 -- accel/accel.sh@21 -- # val= 00:06:47.906 02:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.906 02:04:02 -- accel/accel.sh@21 -- # val= 00:06:47.906 02:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.906 02:04:02 -- accel/accel.sh@21 -- # val= 00:06:47.906 02:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.906 02:04:02 -- accel/accel.sh@21 -- # val= 00:06:47.906 02:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.906 02:04:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.906 02:04:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.906 02:04:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:47.906 02:04:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.906 00:06:47.906 real 0m2.825s 00:06:47.906 user 0m2.467s 00:06:47.906 sys 0m0.146s 00:06:47.906 02:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.906 02:04:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.906 ************************************ 00:06:47.906 END 
TEST accel_decomp_mthread 00:06:47.906 ************************************ 00:06:48.164 02:04:02 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:48.164 02:04:02 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:48.164 02:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.164 02:04:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.164 ************************************ 00:06:48.164 START TEST accel_deomp_full_mthread 00:06:48.164 ************************************ 00:06:48.164 02:04:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:48.164 02:04:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.164 02:04:02 -- accel/accel.sh@17 -- # local accel_module 00:06:48.164 02:04:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:48.164 02:04:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:48.164 02:04:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.164 02:04:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.164 02:04:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.164 02:04:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.164 02:04:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.164 02:04:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.164 02:04:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.164 02:04:02 -- accel/accel.sh@42 -- # jq -r . 00:06:48.164 [2024-05-14 02:04:02.533988] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:48.164 [2024-05-14 02:04:02.534649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59429 ] 00:06:48.164 [2024-05-14 02:04:02.672819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.422 [2024-05-14 02:04:02.758064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.796 02:04:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:49.797 00:06:49.797 SPDK Configuration: 00:06:49.797 Core mask: 0x1 00:06:49.797 00:06:49.797 Accel Perf Configuration: 00:06:49.797 Workload Type: decompress 00:06:49.797 Transfer size: 111250 bytes 00:06:49.797 Vector count 1 00:06:49.797 Module: software 00:06:49.797 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:49.797 Queue depth: 32 00:06:49.797 Allocate depth: 32 00:06:49.797 # threads/core: 2 00:06:49.797 Run time: 1 seconds 00:06:49.797 Verify: Yes 00:06:49.797 00:06:49.797 Running for 1 seconds... 
00:06:49.797 00:06:49.797 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.797 ------------------------------------------------------------------------------------ 00:06:49.797 0,1 2048/s 84 MiB/s 0 0 00:06:49.797 0,0 2016/s 83 MiB/s 0 0 00:06:49.797 ==================================================================================== 00:06:49.797 Total 4064/s 431 MiB/s 0 0' 00:06:49.797 02:04:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.797 02:04:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.797 02:04:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.797 02:04:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.797 02:04:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.797 02:04:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.797 02:04:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.797 02:04:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.797 02:04:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.797 02:04:03 -- accel/accel.sh@42 -- # jq -r . 00:06:49.797 [2024-05-14 02:04:04.006087] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:49.797 [2024-05-14 02:04:04.006192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59445 ] 00:06:49.797 [2024-05-14 02:04:04.143962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.797 [2024-05-14 02:04:04.218846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val= 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val= 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val= 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val=0x1 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val= 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val= 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val=decompress 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val= 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val=software 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val=32 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val=32 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val=2 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val=Yes 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val= 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:49.797 02:04:04 -- accel/accel.sh@21 -- # val= 00:06:49.797 02:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # IFS=: 00:06:49.797 02:04:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.170 02:04:05 -- accel/accel.sh@21 -- # val= 00:06:51.170 02:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.170 02:04:05 -- accel/accel.sh@21 -- # val= 00:06:51.170 02:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.170 02:04:05 -- accel/accel.sh@21 -- # val= 00:06:51.170 02:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # 
read -r var val 00:06:51.170 02:04:05 -- accel/accel.sh@21 -- # val= 00:06:51.170 02:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.170 02:04:05 -- accel/accel.sh@21 -- # val= 00:06:51.170 02:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.170 02:04:05 -- accel/accel.sh@21 -- # val= 00:06:51.170 02:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.170 02:04:05 -- accel/accel.sh@21 -- # val= 00:06:51.170 02:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # IFS=: 00:06:51.170 02:04:05 -- accel/accel.sh@20 -- # read -r var val 00:06:51.170 02:04:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.170 02:04:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:51.170 02:04:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.170 00:06:51.170 real 0m2.929s 00:06:51.170 user 0m2.560s 00:06:51.170 sys 0m0.155s 00:06:51.170 02:04:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.170 02:04:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.170 ************************************ 00:06:51.170 END TEST accel_deomp_full_mthread 00:06:51.170 ************************************ 00:06:51.170 02:04:05 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:51.170 02:04:05 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.170 02:04:05 -- accel/accel.sh@129 -- # build_accel_config 00:06:51.170 02:04:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.170 02:04:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:51.170 02:04:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.170 02:04:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.170 02:04:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.170 02:04:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.170 02:04:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.170 02:04:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.170 02:04:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.170 02:04:05 -- accel/accel.sh@42 -- # jq -r . 00:06:51.170 ************************************ 00:06:51.170 START TEST accel_dif_functional_tests 00:06:51.170 ************************************ 00:06:51.170 02:04:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.170 [2024-05-14 02:04:05.537033] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
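Throughout this suite the test binaries take their accel configuration as '-c /dev/fd/62' rather than a file on disk: the harness builds a JSON config (the build_accel_config / accel_json_cfg traces above) and passes it in over file descriptor 62. A minimal bash sketch of that pattern; the JSON body below is a placeholder, not the config the harness actually generates:

exec 62< <(printf '{"subsystems": []}\n')
/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
exec 62<&-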
00:06:51.170 [2024-05-14 02:04:05.537171] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59475 ] 00:06:51.171 [2024-05-14 02:04:05.675303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.428 [2024-05-14 02:04:05.772818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.428 [2024-05-14 02:04:05.772902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.428 [2024-05-14 02:04:05.772926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.428 00:06:51.428 00:06:51.428 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.428 http://cunit.sourceforge.net/ 00:06:51.428 00:06:51.428 00:06:51.428 Suite: accel_dif 00:06:51.428 Test: verify: DIF generated, GUARD check ...passed 00:06:51.428 Test: verify: DIF generated, APPTAG check ...passed 00:06:51.428 Test: verify: DIF generated, REFTAG check ...passed 00:06:51.428 Test: verify: DIF not generated, GUARD check ...[2024-05-14 02:04:05.844788] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.428 [2024-05-14 02:04:05.845016] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.428 passed 00:06:51.428 Test: verify: DIF not generated, APPTAG check ...[2024-05-14 02:04:05.845191] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.428 passed 00:06:51.428 Test: verify: DIF not generated, REFTAG check ...[2024-05-14 02:04:05.845390] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.428 [2024-05-14 02:04:05.845587] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.428 passed 00:06:51.428 Test: verify: APPTAG correct, APPTAG check ...[2024-05-14 02:04:05.845710] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.428 passed 00:06:51.428 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:51.428 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:51.428 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:51.428 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:51.428 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:51.428 Test: generate copy: DIF generated, GUARD check ...passed 00:06:51.428 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:51.428 Test: generate copy: DIF generated, REFTAG check ...[2024-05-14 02:04:05.846015] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:51.428 [2024-05-14 02:04:05.846402] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:51.428 passed 00:06:51.428 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:51.428 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:51.428 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:51.428 Test: generate copy: iovecs-len validate ...[2024-05-14 02:04:05.847318] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned passed 00:06:51.428 Test: generate copy: buffer alignment validate ...passed 00:06:51.428 00:06:51.428 
with block_size. 00:06:51.428 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.428 suites 1 1 n/a 0 0 00:06:51.428 tests 20 20 20 0 0 00:06:51.428 asserts 204 204 204 0 n/a 00:06:51.428 00:06:51.428 Elapsed time = 0.005 seconds 00:06:51.686 00:06:51.686 real 0m0.602s 00:06:51.686 user 0m0.731s 00:06:51.686 sys 0m0.129s 00:06:51.686 02:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.686 ************************************ 00:06:51.686 END TEST accel_dif_functional_tests 00:06:51.686 02:04:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.686 ************************************ 00:06:51.686 00:06:51.686 real 1m0.478s 00:06:51.686 user 1m5.727s 00:06:51.686 sys 0m4.290s 00:06:51.686 02:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.686 02:04:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.686 ************************************ 00:06:51.686 END TEST accel 00:06:51.686 ************************************ 00:06:51.686 02:04:06 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:51.686 02:04:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:51.686 02:04:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.686 02:04:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.686 ************************************ 00:06:51.686 START TEST accel_rpc 00:06:51.686 ************************************ 00:06:51.686 02:04:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:51.686 * Looking for test storage... 00:06:51.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:51.686 02:04:06 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.686 02:04:06 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59544 00:06:51.686 02:04:06 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:51.686 02:04:06 -- accel/accel_rpc.sh@15 -- # waitforlisten 59544 00:06:51.686 02:04:06 -- common/autotest_common.sh@819 -- # '[' -z 59544 ']' 00:06:51.686 02:04:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.686 02:04:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.686 02:04:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.686 02:04:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.686 02:04:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.943 [2024-05-14 02:04:06.297626] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
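The accel_rpc test starts 'spdk_tgt --wait-for-rpc' and then sits in waitforlisten until the target answers on the UNIX socket /var/tmp/spdk.sock. A simplified stand-in for that wait loop; the real helper in autotest_common.sh does retry accounting and more, so treat this purely as a sketch:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done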
00:06:51.943 [2024-05-14 02:04:06.297758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59544 ] 00:06:51.943 [2024-05-14 02:04:06.440127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.943 [2024-05-14 02:04:06.499476] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.943 [2024-05-14 02:04:06.499633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.877 02:04:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.877 02:04:07 -- common/autotest_common.sh@852 -- # return 0 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:52.878 02:04:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.878 02:04:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.878 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:06:52.878 ************************************ 00:06:52.878 START TEST accel_assign_opcode 00:06:52.878 ************************************ 00:06:52.878 02:04:07 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:52.878 02:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.878 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:06:52.878 [2024-05-14 02:04:07.240102] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:52.878 02:04:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:52.878 02:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.878 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:06:52.878 [2024-05-14 02:04:07.248118] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:52.878 02:04:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:52.878 02:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.878 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:06:52.878 02:04:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:52.878 02:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.878 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:06:52.878 02:04:07 -- accel/accel_rpc.sh@42 -- # grep software 00:06:52.878 02:04:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.136 software 00:06:53.136 00:06:53.136 real 0m0.243s 00:06:53.136 user 0m0.064s 00:06:53.136 sys 0m0.005s 00:06:53.136 02:04:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.136 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.136 ************************************ 
00:06:53.136 END TEST accel_assign_opcode 00:06:53.136 ************************************ 00:06:53.136 02:04:07 -- accel/accel_rpc.sh@55 -- # killprocess 59544 00:06:53.136 02:04:07 -- common/autotest_common.sh@926 -- # '[' -z 59544 ']' 00:06:53.136 02:04:07 -- common/autotest_common.sh@930 -- # kill -0 59544 00:06:53.136 02:04:07 -- common/autotest_common.sh@931 -- # uname 00:06:53.136 02:04:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:53.136 02:04:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59544 00:06:53.136 02:04:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:53.136 02:04:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:53.136 02:04:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59544' 00:06:53.136 killing process with pid 59544 00:06:53.136 02:04:07 -- common/autotest_common.sh@945 -- # kill 59544 00:06:53.136 02:04:07 -- common/autotest_common.sh@950 -- # wait 59544 00:06:53.412 00:06:53.412 real 0m1.650s 00:06:53.412 user 0m1.823s 00:06:53.412 sys 0m0.332s 00:06:53.412 02:04:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.412 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.412 ************************************ 00:06:53.412 END TEST accel_rpc 00:06:53.412 ************************************ 00:06:53.412 02:04:07 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.412 02:04:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:53.412 02:04:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.412 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.412 ************************************ 00:06:53.412 START TEST app_cmdline 00:06:53.412 ************************************ 00:06:53.412 02:04:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.412 * Looking for test storage... 00:06:53.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:53.412 02:04:07 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:53.412 02:04:07 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59654 00:06:53.412 02:04:07 -- app/cmdline.sh@18 -- # waitforlisten 59654 00:06:53.412 02:04:07 -- common/autotest_common.sh@819 -- # '[' -z 59654 ']' 00:06:53.412 02:04:07 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:53.412 02:04:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.412 02:04:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:53.412 02:04:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.412 02:04:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:53.412 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.412 [2024-05-14 02:04:07.974134] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
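For the app_cmdline test, spdk_tgt is restarted with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods should be callable. A brief sketch of the whitelist check, assuming the target is already listening on the default socket:

    scripts/rpc.py spdk_get_version                        # allowed: returns the version object shown below
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # allowed: should list exactly the two methods
    scripts/rpc.py env_dpdk_get_mem_stats                  # not whitelisted: expected to fail with -32601 Method not found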
00:06:53.412 [2024-05-14 02:04:07.974226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59654 ] 00:06:53.690 [2024-05-14 02:04:08.105815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.690 [2024-05-14 02:04:08.189407] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.690 [2024-05-14 02:04:08.189637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.623 02:04:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.623 02:04:08 -- common/autotest_common.sh@852 -- # return 0 00:06:54.623 02:04:08 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:54.881 { 00:06:54.881 "fields": { 00:06:54.881 "commit": "36faa8c31", 00:06:54.881 "major": 24, 00:06:54.881 "minor": 1, 00:06:54.881 "patch": 1, 00:06:54.881 "suffix": "-pre" 00:06:54.881 }, 00:06:54.881 "version": "SPDK v24.01.1-pre git sha1 36faa8c31" 00:06:54.881 } 00:06:54.881 02:04:09 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:54.881 02:04:09 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:54.881 02:04:09 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:54.881 02:04:09 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:54.881 02:04:09 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:54.881 02:04:09 -- app/cmdline.sh@26 -- # sort 00:06:54.881 02:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.881 02:04:09 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:54.881 02:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.881 02:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.881 02:04:09 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:54.881 02:04:09 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:54.881 02:04:09 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.881 02:04:09 -- common/autotest_common.sh@640 -- # local es=0 00:06:54.881 02:04:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.881 02:04:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.881 02:04:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.881 02:04:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.881 02:04:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.881 02:04:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.881 02:04:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.881 02:04:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.881 02:04:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:54.881 02:04:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.141 2024/05/14 02:04:09 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:55.141 request: 00:06:55.141 { 00:06:55.141 "method": "env_dpdk_get_mem_stats", 00:06:55.141 "params": {} 00:06:55.141 } 00:06:55.141 Got JSON-RPC error response 00:06:55.141 GoRPCClient: error on JSON-RPC call 00:06:55.141 02:04:09 -- common/autotest_common.sh@643 -- # es=1 00:06:55.141 02:04:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:55.141 02:04:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:55.141 02:04:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:55.141 02:04:09 -- app/cmdline.sh@1 -- # killprocess 59654 00:06:55.141 02:04:09 -- common/autotest_common.sh@926 -- # '[' -z 59654 ']' 00:06:55.141 02:04:09 -- common/autotest_common.sh@930 -- # kill -0 59654 00:06:55.141 02:04:09 -- common/autotest_common.sh@931 -- # uname 00:06:55.141 02:04:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:55.141 02:04:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59654 00:06:55.141 killing process with pid 59654 00:06:55.141 02:04:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:55.141 02:04:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:55.141 02:04:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59654' 00:06:55.141 02:04:09 -- common/autotest_common.sh@945 -- # kill 59654 00:06:55.141 02:04:09 -- common/autotest_common.sh@950 -- # wait 59654 00:06:55.399 00:06:55.399 real 0m2.051s 00:06:55.399 user 0m2.731s 00:06:55.399 sys 0m0.371s 00:06:55.399 ************************************ 00:06:55.399 END TEST app_cmdline 00:06:55.399 ************************************ 00:06:55.399 02:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.399 02:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:55.399 02:04:09 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.399 02:04:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.399 02:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.399 02:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:55.399 ************************************ 00:06:55.399 START TEST version 00:06:55.399 ************************************ 00:06:55.399 02:04:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.658 * Looking for test storage... 
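The version test that follows reads each version component straight out of include/spdk/version.h and cross-checks the result against the installed Python package. A condensed sketch of that pipeline, run from the repository root:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
    # MINOR, PATCH and SUFFIX are extracted the same way, then the assembled string is compared with:
    python3 -c 'import spdk; print(spdk.__version__)'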
00:06:55.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.658 02:04:10 -- app/version.sh@17 -- # get_header_version major 00:06:55.658 02:04:10 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.658 02:04:10 -- app/version.sh@14 -- # cut -f2 00:06:55.658 02:04:10 -- app/version.sh@14 -- # tr -d '"' 00:06:55.658 02:04:10 -- app/version.sh@17 -- # major=24 00:06:55.658 02:04:10 -- app/version.sh@18 -- # get_header_version minor 00:06:55.658 02:04:10 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.658 02:04:10 -- app/version.sh@14 -- # cut -f2 00:06:55.658 02:04:10 -- app/version.sh@14 -- # tr -d '"' 00:06:55.658 02:04:10 -- app/version.sh@18 -- # minor=1 00:06:55.658 02:04:10 -- app/version.sh@19 -- # get_header_version patch 00:06:55.658 02:04:10 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.658 02:04:10 -- app/version.sh@14 -- # cut -f2 00:06:55.658 02:04:10 -- app/version.sh@14 -- # tr -d '"' 00:06:55.658 02:04:10 -- app/version.sh@19 -- # patch=1 00:06:55.658 02:04:10 -- app/version.sh@20 -- # get_header_version suffix 00:06:55.658 02:04:10 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.658 02:04:10 -- app/version.sh@14 -- # cut -f2 00:06:55.658 02:04:10 -- app/version.sh@14 -- # tr -d '"' 00:06:55.658 02:04:10 -- app/version.sh@20 -- # suffix=-pre 00:06:55.658 02:04:10 -- app/version.sh@22 -- # version=24.1 00:06:55.658 02:04:10 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:55.658 02:04:10 -- app/version.sh@25 -- # version=24.1.1 00:06:55.658 02:04:10 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:55.658 02:04:10 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:55.658 02:04:10 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:55.658 02:04:10 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:55.658 02:04:10 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:55.658 00:06:55.658 real 0m0.124s 00:06:55.658 user 0m0.066s 00:06:55.658 sys 0m0.082s 00:06:55.658 ************************************ 00:06:55.658 END TEST version 00:06:55.658 ************************************ 00:06:55.658 02:04:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.658 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.658 02:04:10 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:55.658 02:04:10 -- spdk/autotest.sh@204 -- # uname -s 00:06:55.658 02:04:10 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:55.658 02:04:10 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:55.658 02:04:10 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:55.659 02:04:10 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:55.659 02:04:10 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:55.659 02:04:10 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:55.659 02:04:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:55.659 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.659 02:04:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:55.659 02:04:10 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:55.659 02:04:10 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:55.659 02:04:10 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:55.659 02:04:10 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:55.659 02:04:10 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:55.659 02:04:10 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:55.659 02:04:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:55.659 02:04:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.659 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.659 ************************************ 00:06:55.659 START TEST nvmf_tcp 00:06:55.659 ************************************ 00:06:55.659 02:04:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:55.659 * Looking for test storage... 00:06:55.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:55.659 02:04:10 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:55.659 02:04:10 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:55.659 02:04:10 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.659 02:04:10 -- nvmf/common.sh@7 -- # uname -s 00:06:55.659 02:04:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.659 02:04:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.659 02:04:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.659 02:04:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.659 02:04:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.659 02:04:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.659 02:04:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.659 02:04:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.659 02:04:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.659 02:04:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.659 02:04:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:06:55.659 02:04:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:06:55.659 02:04:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.659 02:04:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.659 02:04:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:55.659 02:04:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.659 02:04:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.659 02:04:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.659 02:04:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.659 02:04:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.659 02:04:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.659 02:04:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.659 02:04:10 -- paths/export.sh@5 -- # export PATH 00:06:55.659 02:04:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.659 02:04:10 -- nvmf/common.sh@46 -- # : 0 00:06:55.659 02:04:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:55.659 02:04:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:55.659 02:04:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:55.659 02:04:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.659 02:04:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.659 02:04:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:55.659 02:04:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:55.659 02:04:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:55.659 02:04:10 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:55.659 02:04:10 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:55.659 02:04:10 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:55.659 02:04:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:55.659 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.659 02:04:10 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:55.659 02:04:10 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:55.659 02:04:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:55.659 02:04:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.659 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.917 ************************************ 00:06:55.917 START TEST nvmf_example 00:06:55.917 ************************************ 00:06:55.917 02:04:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:55.917 * Looking for test storage... 
00:06:55.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:55.917 02:04:10 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.917 02:04:10 -- nvmf/common.sh@7 -- # uname -s 00:06:55.917 02:04:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.917 02:04:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.917 02:04:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.917 02:04:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.917 02:04:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.918 02:04:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.918 02:04:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.918 02:04:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.918 02:04:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.918 02:04:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.918 02:04:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:06:55.918 02:04:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:06:55.918 02:04:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.918 02:04:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.918 02:04:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:55.918 02:04:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.918 02:04:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.918 02:04:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.918 02:04:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.918 02:04:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.918 02:04:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.918 02:04:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.918 02:04:10 -- 
paths/export.sh@5 -- # export PATH 00:06:55.918 02:04:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.918 02:04:10 -- nvmf/common.sh@46 -- # : 0 00:06:55.918 02:04:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:55.918 02:04:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:55.918 02:04:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:55.918 02:04:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.918 02:04:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.918 02:04:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:55.918 02:04:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:55.918 02:04:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:55.918 02:04:10 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:55.918 02:04:10 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:55.918 02:04:10 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:55.918 02:04:10 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:55.918 02:04:10 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:55.918 02:04:10 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:55.918 02:04:10 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:55.918 02:04:10 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:55.918 02:04:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:55.918 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.918 02:04:10 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:55.918 02:04:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:55.918 02:04:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.918 02:04:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:55.918 02:04:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:55.918 02:04:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:55.918 02:04:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.918 02:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.918 02:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.918 02:04:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:55.918 02:04:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:55.918 02:04:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:55.918 02:04:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:55.918 02:04:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:55.918 02:04:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:55.918 02:04:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.918 02:04:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.918 02:04:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:55.918 02:04:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:55.918 02:04:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:55.918 02:04:10 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:55.918 02:04:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:55.918 02:04:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.918 02:04:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:55.918 02:04:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:55.918 02:04:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:55.918 02:04:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:55.918 02:04:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:55.918 Cannot find device "nvmf_init_br" 00:06:55.918 02:04:10 -- nvmf/common.sh@153 -- # true 00:06:55.918 02:04:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:55.918 Cannot find device "nvmf_tgt_br" 00:06:55.918 02:04:10 -- nvmf/common.sh@154 -- # true 00:06:55.918 02:04:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:55.918 Cannot find device "nvmf_tgt_br2" 00:06:55.918 02:04:10 -- nvmf/common.sh@155 -- # true 00:06:55.918 02:04:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:55.918 Cannot find device "nvmf_init_br" 00:06:55.918 02:04:10 -- nvmf/common.sh@156 -- # true 00:06:55.918 02:04:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:55.918 Cannot find device "nvmf_tgt_br" 00:06:55.918 02:04:10 -- nvmf/common.sh@157 -- # true 00:06:55.918 02:04:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:55.918 Cannot find device "nvmf_tgt_br2" 00:06:55.918 02:04:10 -- nvmf/common.sh@158 -- # true 00:06:55.918 02:04:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:55.918 Cannot find device "nvmf_br" 00:06:55.918 02:04:10 -- nvmf/common.sh@159 -- # true 00:06:55.918 02:04:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:55.918 Cannot find device "nvmf_init_if" 00:06:55.918 02:04:10 -- nvmf/common.sh@160 -- # true 00:06:55.918 02:04:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:55.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.918 02:04:10 -- nvmf/common.sh@161 -- # true 00:06:55.918 02:04:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:55.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.918 02:04:10 -- nvmf/common.sh@162 -- # true 00:06:55.918 02:04:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:55.918 02:04:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:55.918 02:04:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:55.918 02:04:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:55.918 02:04:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:55.918 02:04:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:56.176 02:04:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:56.176 02:04:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:56.176 02:04:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:56.176 02:04:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:56.176 
02:04:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:56.176 02:04:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:56.176 02:04:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:56.176 02:04:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:56.176 02:04:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:56.177 02:04:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:56.177 02:04:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:56.177 02:04:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:56.177 02:04:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:56.177 02:04:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:56.177 02:04:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:56.177 02:04:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:56.177 02:04:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:56.177 02:04:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:56.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:06:56.177 00:06:56.177 --- 10.0.0.2 ping statistics --- 00:06:56.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.177 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:06:56.177 02:04:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:56.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:56.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:06:56.177 00:06:56.177 --- 10.0.0.3 ping statistics --- 00:06:56.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.177 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:06:56.177 02:04:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:56.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:06:56.177 00:06:56.177 --- 10.0.0.1 ping statistics --- 00:06:56.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.177 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:06:56.177 02:04:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.177 02:04:10 -- nvmf/common.sh@421 -- # return 0 00:06:56.177 02:04:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:56.177 02:04:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.177 02:04:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:56.177 02:04:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:56.177 02:04:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.177 02:04:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:56.177 02:04:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:56.177 02:04:10 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:56.177 02:04:10 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:56.177 02:04:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:56.177 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:56.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
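A summary sketch of the virtual test network that nvmf_veth_init built above, condensed from the commands recorded in this run: the target side lives in the nvmf_tgt_ns_spdk namespace and reaches the host through veth pairs attached to the nvmf_br bridge (only the 10.0.0.1/10.0.0.2 pair is shown; the second target interface is set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # host-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port

Once 10.0.0.2 answers ping, the example nvmf target is started inside the namespace and driven over TCP with spdk_nvme_perf, as the run below shows.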
00:06:56.177 02:04:10 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:56.177 02:04:10 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:56.177 02:04:10 -- target/nvmf_example.sh@34 -- # nvmfpid=60010 00:06:56.177 02:04:10 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:56.177 02:04:10 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:56.177 02:04:10 -- target/nvmf_example.sh@36 -- # waitforlisten 60010 00:06:56.177 02:04:10 -- common/autotest_common.sh@819 -- # '[' -z 60010 ']' 00:06:56.177 02:04:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.177 02:04:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:56.177 02:04:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.177 02:04:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:56.177 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.551 02:04:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:57.551 02:04:11 -- common/autotest_common.sh@852 -- # return 0 00:06:57.551 02:04:11 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:57.551 02:04:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:57.551 02:04:11 -- common/autotest_common.sh@10 -- # set +x 00:06:57.551 02:04:11 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:57.551 02:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.551 02:04:11 -- common/autotest_common.sh@10 -- # set +x 00:06:57.551 02:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.551 02:04:11 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:57.551 02:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.551 02:04:11 -- common/autotest_common.sh@10 -- # set +x 00:06:57.551 02:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.551 02:04:11 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:57.551 02:04:11 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:57.551 02:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.551 02:04:11 -- common/autotest_common.sh@10 -- # set +x 00:06:57.551 02:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.551 02:04:11 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:57.551 02:04:11 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:57.551 02:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.551 02:04:11 -- common/autotest_common.sh@10 -- # set +x 00:06:57.551 02:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.551 02:04:11 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:57.551 02:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.551 02:04:11 -- common/autotest_common.sh@10 -- # set +x 00:06:57.551 02:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.551 02:04:11 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:57.551 02:04:11 -- 
target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:09.751 Initializing NVMe Controllers 00:07:09.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:09.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:09.751 Initialization complete. Launching workers. 00:07:09.751 ======================================================== 00:07:09.751 Latency(us) 00:07:09.751 Device Information : IOPS MiB/s Average min max 00:07:09.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14990.86 58.56 4268.95 721.17 23156.30 00:07:09.752 ======================================================== 00:07:09.752 Total : 14990.86 58.56 4268.95 721.17 23156.30 00:07:09.752 00:07:09.752 02:04:22 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:09.752 02:04:22 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:09.752 02:04:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:09.752 02:04:22 -- nvmf/common.sh@116 -- # sync 00:07:09.752 02:04:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:09.752 02:04:22 -- nvmf/common.sh@119 -- # set +e 00:07:09.752 02:04:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:09.752 02:04:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:09.752 rmmod nvme_tcp 00:07:09.752 rmmod nvme_fabrics 00:07:09.752 rmmod nvme_keyring 00:07:09.752 02:04:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:09.752 02:04:22 -- nvmf/common.sh@123 -- # set -e 00:07:09.752 02:04:22 -- nvmf/common.sh@124 -- # return 0 00:07:09.752 02:04:22 -- nvmf/common.sh@477 -- # '[' -n 60010 ']' 00:07:09.752 02:04:22 -- nvmf/common.sh@478 -- # killprocess 60010 00:07:09.752 02:04:22 -- common/autotest_common.sh@926 -- # '[' -z 60010 ']' 00:07:09.752 02:04:22 -- common/autotest_common.sh@930 -- # kill -0 60010 00:07:09.752 02:04:22 -- common/autotest_common.sh@931 -- # uname 00:07:09.752 02:04:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:09.752 02:04:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60010 00:07:09.752 killing process with pid 60010 00:07:09.752 02:04:22 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:09.752 02:04:22 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:09.752 02:04:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60010' 00:07:09.752 02:04:22 -- common/autotest_common.sh@945 -- # kill 60010 00:07:09.752 02:04:22 -- common/autotest_common.sh@950 -- # wait 60010 00:07:09.752 nvmf threads initialize successfully 00:07:09.752 bdev subsystem init successfully 00:07:09.752 created a nvmf target service 00:07:09.752 create targets's poll groups done 00:07:09.752 all subsystems of target started 00:07:09.752 nvmf target is running 00:07:09.752 all subsystems of target stopped 00:07:09.752 destroy targets's poll groups done 00:07:09.752 destroyed the nvmf target service 00:07:09.752 bdev subsystem finish successfully 00:07:09.752 nvmf threads destroy successfully 00:07:09.752 02:04:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:09.752 02:04:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:09.752 02:04:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:09.752 02:04:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:09.752 02:04:22 -- 
nvmf/common.sh@277 -- # remove_spdk_ns 00:07:09.752 02:04:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.752 02:04:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.752 02:04:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.752 02:04:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:09.752 02:04:22 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:09.752 02:04:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:09.752 02:04:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.752 00:07:09.752 real 0m12.266s 00:07:09.752 user 0m44.362s 00:07:09.752 sys 0m1.812s 00:07:09.752 02:04:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.752 02:04:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.752 ************************************ 00:07:09.752 END TEST nvmf_example 00:07:09.752 ************************************ 00:07:09.752 02:04:22 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:09.752 02:04:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:09.752 02:04:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.752 02:04:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.752 ************************************ 00:07:09.752 START TEST nvmf_filesystem 00:07:09.752 ************************************ 00:07:09.752 02:04:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:09.752 * Looking for test storage... 00:07:09.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:09.752 02:04:22 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:09.752 02:04:22 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:09.752 02:04:22 -- common/autotest_common.sh@34 -- # set -e 00:07:09.752 02:04:22 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:09.752 02:04:22 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:09.752 02:04:22 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:09.752 02:04:22 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:09.752 02:04:22 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:09.752 02:04:22 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:09.752 02:04:22 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:09.752 02:04:22 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:09.752 02:04:22 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:09.752 02:04:22 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:09.752 02:04:22 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:09.752 02:04:22 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:09.752 02:04:22 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:09.752 02:04:22 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:09.752 02:04:22 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:09.752 02:04:22 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:09.752 02:04:22 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:09.752 02:04:22 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:09.752 02:04:22 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:09.752 02:04:22 -- 
common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:09.752 02:04:22 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:09.752 02:04:22 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:09.752 02:04:22 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:09.752 02:04:22 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:09.752 02:04:22 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:09.752 02:04:22 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:09.752 02:04:22 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:09.752 02:04:22 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:09.752 02:04:22 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:09.752 02:04:22 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:09.752 02:04:22 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:09.752 02:04:22 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:09.752 02:04:22 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:09.752 02:04:22 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:09.752 02:04:22 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:09.752 02:04:22 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:09.752 02:04:22 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:09.752 02:04:22 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:09.752 02:04:22 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:09.752 02:04:22 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:09.752 02:04:22 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:09.752 02:04:22 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:09.752 02:04:22 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:09.752 02:04:22 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:09.752 02:04:22 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:09.752 02:04:22 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:09.752 02:04:22 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:09.752 02:04:22 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:09.752 02:04:22 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:09.752 02:04:22 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:09.752 02:04:22 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:09.752 02:04:22 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:09.752 02:04:22 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:09.752 02:04:22 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:09.752 02:04:22 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:09.752 02:04:22 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:09.752 02:04:22 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:09.752 02:04:22 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:09.752 02:04:22 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:09.752 02:04:22 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:09.752 02:04:22 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:09.752 02:04:22 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:09.752 02:04:22 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:09.752 02:04:22 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:07:09.752 02:04:22 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:09.752 02:04:22 -- 
common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:09.752 02:04:22 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:09.752 02:04:22 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:09.752 02:04:22 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:09.752 02:04:22 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:09.752 02:04:22 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:09.752 02:04:22 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:07:09.752 02:04:22 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:09.752 02:04:22 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:09.752 02:04:22 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:09.752 02:04:22 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:09.752 02:04:22 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:09.752 02:04:22 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:09.752 02:04:22 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:09.752 02:04:22 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:09.752 02:04:22 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:09.752 02:04:22 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:09.752 02:04:22 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:09.752 02:04:22 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:09.752 02:04:22 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:09.752 02:04:22 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:09.752 02:04:22 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:09.753 02:04:22 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:09.753 02:04:22 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:09.753 02:04:22 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:09.753 02:04:22 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:09.753 02:04:22 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:09.753 02:04:22 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:09.753 02:04:22 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:09.753 02:04:22 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:09.753 02:04:22 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:09.753 02:04:22 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:09.753 02:04:22 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:09.753 02:04:22 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:09.753 #define SPDK_CONFIG_H 00:07:09.753 #define SPDK_CONFIG_APPS 1 00:07:09.753 #define SPDK_CONFIG_ARCH native 00:07:09.753 #undef SPDK_CONFIG_ASAN 00:07:09.753 #define SPDK_CONFIG_AVAHI 1 00:07:09.753 #undef SPDK_CONFIG_CET 00:07:09.753 #define SPDK_CONFIG_COVERAGE 1 00:07:09.753 #define SPDK_CONFIG_CROSS_PREFIX 00:07:09.753 #undef SPDK_CONFIG_CRYPTO 00:07:09.753 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:09.753 #undef SPDK_CONFIG_CUSTOMOCF 00:07:09.753 #undef SPDK_CONFIG_DAOS 00:07:09.753 #define SPDK_CONFIG_DAOS_DIR 00:07:09.753 #define SPDK_CONFIG_DEBUG 1 00:07:09.753 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 
00:07:09.753 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:09.753 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:09.753 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:09.753 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:09.753 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:09.753 #define SPDK_CONFIG_EXAMPLES 1 00:07:09.753 #undef SPDK_CONFIG_FC 00:07:09.753 #define SPDK_CONFIG_FC_PATH 00:07:09.753 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:09.753 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:09.753 #undef SPDK_CONFIG_FUSE 00:07:09.753 #undef SPDK_CONFIG_FUZZER 00:07:09.753 #define SPDK_CONFIG_FUZZER_LIB 00:07:09.753 #define SPDK_CONFIG_GOLANG 1 00:07:09.753 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:09.753 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:09.753 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:09.753 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:09.753 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:09.753 #define SPDK_CONFIG_IDXD 1 00:07:09.753 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:09.753 #undef SPDK_CONFIG_IPSEC_MB 00:07:09.753 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:09.753 #define SPDK_CONFIG_ISAL 1 00:07:09.753 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:09.753 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:09.753 #define SPDK_CONFIG_LIBDIR 00:07:09.753 #undef SPDK_CONFIG_LTO 00:07:09.753 #define SPDK_CONFIG_MAX_LCORES 00:07:09.753 #define SPDK_CONFIG_NVME_CUSE 1 00:07:09.753 #undef SPDK_CONFIG_OCF 00:07:09.753 #define SPDK_CONFIG_OCF_PATH 00:07:09.753 #define SPDK_CONFIG_OPENSSL_PATH 00:07:09.753 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:09.753 #undef SPDK_CONFIG_PGO_USE 00:07:09.753 #define SPDK_CONFIG_PREFIX /usr/local 00:07:09.753 #undef SPDK_CONFIG_RAID5F 00:07:09.753 #undef SPDK_CONFIG_RBD 00:07:09.753 #define SPDK_CONFIG_RDMA 1 00:07:09.753 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:09.753 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:09.753 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:09.753 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:09.753 #define SPDK_CONFIG_SHARED 1 00:07:09.753 #undef SPDK_CONFIG_SMA 00:07:09.753 #define SPDK_CONFIG_TESTS 1 00:07:09.753 #undef SPDK_CONFIG_TSAN 00:07:09.753 #define SPDK_CONFIG_UBLK 1 00:07:09.753 #define SPDK_CONFIG_UBSAN 1 00:07:09.753 #undef SPDK_CONFIG_UNIT_TESTS 00:07:09.753 #undef SPDK_CONFIG_URING 00:07:09.753 #define SPDK_CONFIG_URING_PATH 00:07:09.753 #undef SPDK_CONFIG_URING_ZNS 00:07:09.753 #define SPDK_CONFIG_USDT 1 00:07:09.753 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:09.753 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:09.753 #define SPDK_CONFIG_VFIO_USER 1 00:07:09.753 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:09.753 #define SPDK_CONFIG_VHOST 1 00:07:09.753 #define SPDK_CONFIG_VIRTIO 1 00:07:09.753 #undef SPDK_CONFIG_VTUNE 00:07:09.753 #define SPDK_CONFIG_VTUNE_DIR 00:07:09.753 #define SPDK_CONFIG_WERROR 1 00:07:09.753 #define SPDK_CONFIG_WPDK_DIR 00:07:09.753 #undef SPDK_CONFIG_XNVME 00:07:09.753 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:09.753 02:04:22 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:09.753 02:04:22 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.753 02:04:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.753 02:04:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.753 02:04:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.753 02:04:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.753 02:04:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.753 02:04:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.753 02:04:22 -- paths/export.sh@5 -- # export PATH 00:07:09.753 02:04:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.753 02:04:22 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:09.753 02:04:22 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:09.753 02:04:22 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:09.753 02:04:22 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:09.753 02:04:22 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:09.753 02:04:22 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:09.753 02:04:22 -- pm/common@16 -- # TEST_TAG=N/A 00:07:09.753 02:04:22 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:09.753 02:04:22 -- common/autotest_common.sh@52 -- # : 1 00:07:09.753 02:04:22 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:09.753 02:04:22 -- common/autotest_common.sh@56 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:09.753 02:04:22 -- common/autotest_common.sh@58 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:09.753 02:04:22 -- 
common/autotest_common.sh@60 -- # : 1 00:07:09.753 02:04:22 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:09.753 02:04:22 -- common/autotest_common.sh@62 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:09.753 02:04:22 -- common/autotest_common.sh@64 -- # : 00:07:09.753 02:04:22 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:09.753 02:04:22 -- common/autotest_common.sh@66 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:09.753 02:04:22 -- common/autotest_common.sh@68 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:09.753 02:04:22 -- common/autotest_common.sh@70 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:09.753 02:04:22 -- common/autotest_common.sh@72 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:09.753 02:04:22 -- common/autotest_common.sh@74 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:09.753 02:04:22 -- common/autotest_common.sh@76 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:09.753 02:04:22 -- common/autotest_common.sh@78 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:09.753 02:04:22 -- common/autotest_common.sh@80 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:09.753 02:04:22 -- common/autotest_common.sh@82 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:09.753 02:04:22 -- common/autotest_common.sh@84 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:09.753 02:04:22 -- common/autotest_common.sh@86 -- # : 1 00:07:09.753 02:04:22 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:09.753 02:04:22 -- common/autotest_common.sh@88 -- # : 1 00:07:09.753 02:04:22 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:09.753 02:04:22 -- common/autotest_common.sh@90 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:09.753 02:04:22 -- common/autotest_common.sh@92 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:09.753 02:04:22 -- common/autotest_common.sh@94 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:09.753 02:04:22 -- common/autotest_common.sh@96 -- # : tcp 00:07:09.753 02:04:22 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:09.753 02:04:22 -- common/autotest_common.sh@98 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:09.753 02:04:22 -- common/autotest_common.sh@100 -- # : 0 00:07:09.753 02:04:22 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:09.754 02:04:22 -- common/autotest_common.sh@102 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:09.754 02:04:22 -- common/autotest_common.sh@104 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:09.754 02:04:22 -- common/autotest_common.sh@106 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:09.754 
02:04:22 -- common/autotest_common.sh@108 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:09.754 02:04:22 -- common/autotest_common.sh@110 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:09.754 02:04:22 -- common/autotest_common.sh@112 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:09.754 02:04:22 -- common/autotest_common.sh@114 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:09.754 02:04:22 -- common/autotest_common.sh@116 -- # : 1 00:07:09.754 02:04:22 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:09.754 02:04:22 -- common/autotest_common.sh@118 -- # : 00:07:09.754 02:04:22 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:09.754 02:04:22 -- common/autotest_common.sh@120 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:09.754 02:04:22 -- common/autotest_common.sh@122 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:09.754 02:04:22 -- common/autotest_common.sh@124 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:09.754 02:04:22 -- common/autotest_common.sh@126 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:09.754 02:04:22 -- common/autotest_common.sh@128 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:09.754 02:04:22 -- common/autotest_common.sh@130 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:09.754 02:04:22 -- common/autotest_common.sh@132 -- # : 00:07:09.754 02:04:22 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:09.754 02:04:22 -- common/autotest_common.sh@134 -- # : true 00:07:09.754 02:04:22 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:09.754 02:04:22 -- common/autotest_common.sh@136 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:09.754 02:04:22 -- common/autotest_common.sh@138 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:09.754 02:04:22 -- common/autotest_common.sh@140 -- # : 1 00:07:09.754 02:04:22 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:09.754 02:04:22 -- common/autotest_common.sh@142 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:09.754 02:04:22 -- common/autotest_common.sh@144 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:09.754 02:04:22 -- common/autotest_common.sh@146 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:09.754 02:04:22 -- common/autotest_common.sh@148 -- # : 00:07:09.754 02:04:22 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:09.754 02:04:22 -- common/autotest_common.sh@150 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:09.754 02:04:22 -- common/autotest_common.sh@152 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:09.754 02:04:22 -- common/autotest_common.sh@154 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 
00:07:09.754 02:04:22 -- common/autotest_common.sh@156 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:09.754 02:04:22 -- common/autotest_common.sh@158 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:09.754 02:04:22 -- common/autotest_common.sh@160 -- # : 0 00:07:09.754 02:04:22 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:09.754 02:04:22 -- common/autotest_common.sh@163 -- # : 00:07:09.754 02:04:22 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:09.754 02:04:22 -- common/autotest_common.sh@165 -- # : 1 00:07:09.754 02:04:22 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:09.754 02:04:22 -- common/autotest_common.sh@167 -- # : 1 00:07:09.754 02:04:22 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:09.754 02:04:22 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:09.754 02:04:22 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:09.754 02:04:22 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:09.754 02:04:22 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:09.754 02:04:22 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:09.754 02:04:22 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:09.754 02:04:22 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:09.754 02:04:22 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:09.754 02:04:22 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:09.754 02:04:22 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:09.754 02:04:22 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:09.754 02:04:22 -- 
common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:09.754 02:04:22 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:09.754 02:04:22 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:09.754 02:04:22 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:09.754 02:04:22 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:09.754 02:04:22 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:09.754 02:04:22 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:09.754 02:04:22 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:09.754 02:04:22 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:09.754 02:04:22 -- common/autotest_common.sh@196 -- # cat 00:07:09.754 02:04:22 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:09.754 02:04:22 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:09.754 02:04:22 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:09.754 02:04:22 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:09.754 02:04:22 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:09.754 02:04:22 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:09.754 02:04:22 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:09.754 02:04:22 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:09.754 02:04:22 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:09.754 02:04:22 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:09.754 02:04:22 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:09.754 02:04:22 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:09.754 02:04:22 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:09.754 02:04:22 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:09.754 02:04:22 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:09.754 02:04:22 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:09.754 02:04:22 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:09.754 02:04:22 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:09.754 02:04:22 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 
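The long run of paired ": N" / "export SPDK_TEST_*" entries in the trace above is consistent with bash's default-assignment idiom in autotest_common.sh: each flag keeps whatever value the job configuration pre-seeded and otherwise falls back to a default, then is exported for the child test scripts (which is why some flags trace as ": 1" and the rest as ": 0"). A minimal sketch of that idiom, with the flag name and default chosen for illustration rather than quoted from the SPDK source:

    : "${SPDK_TEST_NVMF:=0}"   # keep a value already set by the job config, otherwise default to 0
    export SPDK_TEST_NVMF      # under xtrace this shows up as ": 1" followed by "export SPDK_TEST_NVMF"

The LD_LIBRARY_PATH, PYTHONPATH, ASAN_OPTIONS and QEMU_BIN entries that follow are the same export-after-compute pattern applied to paths instead of flags.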
00:07:09.754 02:04:22 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:09.754 02:04:22 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:09.754 02:04:22 -- common/autotest_common.sh@249 -- # valgrind= 00:07:09.754 02:04:22 -- common/autotest_common.sh@255 -- # uname -s 00:07:09.754 02:04:22 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:09.754 02:04:22 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:09.754 02:04:22 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:09.754 02:04:22 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:09.754 02:04:22 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:09.754 02:04:22 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:09.754 02:04:22 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:09.754 02:04:22 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:07:09.754 02:04:22 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:09.754 02:04:22 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:09.754 02:04:22 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:09.754 02:04:22 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:09.754 02:04:22 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:09.754 02:04:22 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:09.754 02:04:22 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:09.754 02:04:22 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:09.754 02:04:22 -- common/autotest_common.sh@309 -- # [[ -z 60249 ]] 00:07:09.755 02:04:22 -- common/autotest_common.sh@309 -- # kill -0 60249 00:07:09.755 02:04:22 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:09.755 02:04:22 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:09.755 02:04:22 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:09.755 02:04:22 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:09.755 02:04:22 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:09.755 02:04:22 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:09.755 02:04:22 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:09.755 02:04:22 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:09.755 02:04:22 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.yBqjfM 00:07:09.755 02:04:22 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:09.755 02:04:22 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:09.755 02:04:22 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:09.755 02:04:22 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.yBqjfM/tests/target /tmp/spdk.yBqjfM 00:07:09.755 02:04:22 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@318 -- # df -T 00:07:09.755 02:04:22 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:07:09.755 
02:04:22 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:07:09.755 02:04:22 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266634240 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:07:09.755 02:04:22 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # avails["$mount"]=2494353408 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2507157504 00:07:09.755 02:04:22 -- common/autotest_common.sh@354 -- # uses["$mount"]=12804096 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # avails["$mount"]=13810065408 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:07:09.755 02:04:22 -- common/autotest_common.sh@354 -- # uses["$mount"]=5214265344 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # avails["$mount"]=13810065408 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:07:09.755 02:04:22 -- common/autotest_common.sh@354 -- # uses["$mount"]=5214265344 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # avails["$mount"]=843546624 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:07:09.755 02:04:22 -- common/autotest_common.sh@354 -- # uses["$mount"]=100016128 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # avails["$mount"]=92499968 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:07:09.755 02:04:22 -- common/autotest_common.sh@354 -- # uses["$mount"]=12107776 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # 
fss["$mount"]=tmpfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267756544 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:07:09.755 02:04:22 -- common/autotest_common.sh@354 -- # uses["$mount"]=135168 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253572608 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253576704 00:07:09.755 02:04:22 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:07:09.755 02:04:22 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # avails["$mount"]=95251161088 00:07:09.755 02:04:22 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:07:09.755 02:04:22 -- common/autotest_common.sh@354 -- # uses["$mount"]=4451618816 00:07:09.755 02:04:22 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:09.755 02:04:22 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:09.755 * Looking for test storage... 00:07:09.755 02:04:22 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:09.755 02:04:22 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:09.755 02:04:22 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:09.755 02:04:22 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:09.755 02:04:22 -- common/autotest_common.sh@363 -- # mount=/home 00:07:09.755 02:04:22 -- common/autotest_common.sh@365 -- # target_space=13810065408 00:07:09.755 02:04:22 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:09.755 02:04:22 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:09.755 02:04:22 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:07:09.755 02:04:22 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:07:09.755 02:04:22 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:07:09.755 02:04:22 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:09.755 02:04:22 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:09.755 02:04:22 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:09.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:09.755 02:04:22 -- common/autotest_common.sh@380 -- # return 0 00:07:09.755 02:04:22 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:09.755 02:04:22 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:09.755 02:04:22 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:09.755 02:04:22 -- common/autotest_common.sh@1671 -- # PS4=' \t -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:09.755 02:04:22 -- common/autotest_common.sh@1672 -- # true 00:07:09.755 02:04:22 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:09.755 02:04:22 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:09.755 02:04:22 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:09.755 02:04:22 -- common/autotest_common.sh@27 -- # exec 00:07:09.755 02:04:22 -- common/autotest_common.sh@29 -- # exec 00:07:09.755 02:04:22 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:09.755 02:04:22 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:09.755 02:04:22 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:09.755 02:04:22 -- common/autotest_common.sh@18 -- # set -x 00:07:09.755 02:04:22 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:09.755 02:04:22 -- nvmf/common.sh@7 -- # uname -s 00:07:09.755 02:04:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.755 02:04:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.755 02:04:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.755 02:04:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.755 02:04:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.755 02:04:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.755 02:04:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.755 02:04:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.755 02:04:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.755 02:04:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.755 02:04:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:07:09.755 02:04:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:07:09.755 02:04:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.755 02:04:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.755 02:04:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:09.755 02:04:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.755 02:04:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.755 02:04:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.755 02:04:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.755 02:04:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.756 02:04:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.756 02:04:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.756 02:04:22 -- paths/export.sh@5 -- # export PATH 00:07:09.756 02:04:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.756 02:04:22 -- nvmf/common.sh@46 -- # : 0 00:07:09.756 02:04:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:09.756 02:04:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:09.756 02:04:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:09.756 02:04:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.756 02:04:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.756 02:04:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:09.756 02:04:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:09.756 02:04:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:09.756 02:04:22 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:09.756 02:04:22 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:09.756 02:04:22 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:09.756 02:04:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:09.756 02:04:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.756 02:04:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:09.756 02:04:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:09.756 02:04:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:09.756 02:04:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.756 02:04:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.756 02:04:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.756 02:04:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:09.756 02:04:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:09.756 02:04:22 -- 
nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:09.756 02:04:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:09.756 02:04:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:09.756 02:04:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:09.756 02:04:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.756 02:04:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.756 02:04:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:09.756 02:04:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:09.756 02:04:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:09.756 02:04:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:09.756 02:04:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:09.756 02:04:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.756 02:04:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:09.756 02:04:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:09.756 02:04:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:09.756 02:04:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:09.756 02:04:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:09.756 02:04:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:09.756 Cannot find device "nvmf_tgt_br" 00:07:09.756 02:04:22 -- nvmf/common.sh@154 -- # true 00:07:09.756 02:04:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:09.756 Cannot find device "nvmf_tgt_br2" 00:07:09.756 02:04:22 -- nvmf/common.sh@155 -- # true 00:07:09.756 02:04:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:09.756 02:04:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:09.756 Cannot find device "nvmf_tgt_br" 00:07:09.756 02:04:22 -- nvmf/common.sh@157 -- # true 00:07:09.756 02:04:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:09.756 Cannot find device "nvmf_tgt_br2" 00:07:09.756 02:04:22 -- nvmf/common.sh@158 -- # true 00:07:09.756 02:04:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:09.756 02:04:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:09.756 02:04:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:09.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.756 02:04:22 -- nvmf/common.sh@161 -- # true 00:07:09.756 02:04:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:09.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.756 02:04:22 -- nvmf/common.sh@162 -- # true 00:07:09.756 02:04:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:09.756 02:04:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:09.756 02:04:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:09.756 02:04:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:09.756 02:04:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:09.756 02:04:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:09.756 02:04:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:09.756 02:04:22 -- nvmf/common.sh@178 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:09.756 02:04:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:09.756 02:04:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:09.756 02:04:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:09.756 02:04:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:09.756 02:04:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:09.756 02:04:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:09.756 02:04:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:09.756 02:04:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:09.756 02:04:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:09.756 02:04:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:09.756 02:04:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:09.756 02:04:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:09.756 02:04:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:09.756 02:04:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:09.756 02:04:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:09.756 02:04:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:09.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:07:09.756 00:07:09.756 --- 10.0.0.2 ping statistics --- 00:07:09.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.756 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:09.756 02:04:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:09.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:09.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:07:09.756 00:07:09.756 --- 10.0.0.3 ping statistics --- 00:07:09.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.756 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:09.756 02:04:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:09.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:09.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:07:09.756 00:07:09.756 --- 10.0.0.1 ping statistics --- 00:07:09.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.756 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:07:09.756 02:04:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.756 02:04:23 -- nvmf/common.sh@421 -- # return 0 00:07:09.756 02:04:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:09.756 02:04:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.756 02:04:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:09.756 02:04:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:09.757 02:04:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.757 02:04:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:09.757 02:04:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:09.757 02:04:23 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:09.757 02:04:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:09.757 02:04:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.757 02:04:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.757 ************************************ 00:07:09.757 START TEST nvmf_filesystem_no_in_capsule 00:07:09.757 ************************************ 00:07:09.757 02:04:23 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:09.757 02:04:23 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:09.757 02:04:23 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:09.757 02:04:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:09.757 02:04:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:09.757 02:04:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.757 02:04:23 -- nvmf/common.sh@469 -- # nvmfpid=60412 00:07:09.757 02:04:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:09.757 02:04:23 -- nvmf/common.sh@470 -- # waitforlisten 60412 00:07:09.757 02:04:23 -- common/autotest_common.sh@819 -- # '[' -z 60412 ']' 00:07:09.757 02:04:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.757 02:04:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:09.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.757 02:04:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.757 02:04:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:09.757 02:04:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.757 [2024-05-14 02:04:23.140483] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:09.757 [2024-05-14 02:04:23.140572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.757 [2024-05-14 02:04:23.278560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.757 [2024-05-14 02:04:23.370432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.757 [2024-05-14 02:04:23.370874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
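Before the target was launched, the nvmf_veth_init calls above assembled the virtual test network: an initiator veth pair left on the host, target veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, then verified by the three pings. A condensed, hand-written recap of those steps, using only commands that appear in the log (the second target interface and several "ip link set ... up" calls are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair, stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peer ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # reachability check into the namespace

With the namespace in place, nvmfappstart runs the target inside it (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers before any rpc_cmd calls are issued.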
00:07:09.757 [2024-05-14 02:04:23.371035] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.757 [2024-05-14 02:04:23.371261] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.757 [2024-05-14 02:04:23.371609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.757 [2024-05-14 02:04:23.371716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.757 [2024-05-14 02:04:23.371877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.757 [2024-05-14 02:04:23.371883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.757 02:04:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:09.757 02:04:24 -- common/autotest_common.sh@852 -- # return 0 00:07:09.757 02:04:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:09.757 02:04:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:09.757 02:04:24 -- common/autotest_common.sh@10 -- # set +x 00:07:09.757 02:04:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.757 02:04:24 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:09.757 02:04:24 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:09.757 02:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.757 02:04:24 -- common/autotest_common.sh@10 -- # set +x 00:07:09.757 [2024-05-14 02:04:24.186074] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.757 02:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.757 02:04:24 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:09.757 02:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.757 02:04:24 -- common/autotest_common.sh@10 -- # set +x 00:07:09.757 Malloc1 00:07:09.757 02:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.757 02:04:24 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:09.757 02:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.757 02:04:24 -- common/autotest_common.sh@10 -- # set +x 00:07:09.757 02:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.757 02:04:24 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:09.757 02:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.757 02:04:24 -- common/autotest_common.sh@10 -- # set +x 00:07:09.757 02:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.757 02:04:24 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.757 02:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.757 02:04:24 -- common/autotest_common.sh@10 -- # set +x 00:07:09.757 [2024-05-14 02:04:24.327856] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.757 02:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.757 02:04:24 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:09.757 02:04:24 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:09.757 02:04:24 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:09.757 02:04:24 -- common/autotest_common.sh@1359 -- # local bs 00:07:09.757 02:04:24 -- 
common/autotest_common.sh@1360 -- # local nb 00:07:09.757 02:04:24 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:09.757 02:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.757 02:04:24 -- common/autotest_common.sh@10 -- # set +x 00:07:10.015 02:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:10.015 02:04:24 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:10.015 { 00:07:10.015 "aliases": [ 00:07:10.015 "b1677f08-d1e6-471f-8d9b-d3aad2a6283e" 00:07:10.015 ], 00:07:10.015 "assigned_rate_limits": { 00:07:10.015 "r_mbytes_per_sec": 0, 00:07:10.015 "rw_ios_per_sec": 0, 00:07:10.015 "rw_mbytes_per_sec": 0, 00:07:10.015 "w_mbytes_per_sec": 0 00:07:10.015 }, 00:07:10.015 "block_size": 512, 00:07:10.015 "claim_type": "exclusive_write", 00:07:10.015 "claimed": true, 00:07:10.015 "driver_specific": {}, 00:07:10.015 "memory_domains": [ 00:07:10.015 { 00:07:10.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.015 "dma_device_type": 2 00:07:10.015 } 00:07:10.015 ], 00:07:10.015 "name": "Malloc1", 00:07:10.015 "num_blocks": 1048576, 00:07:10.015 "product_name": "Malloc disk", 00:07:10.015 "supported_io_types": { 00:07:10.015 "abort": true, 00:07:10.015 "compare": false, 00:07:10.015 "compare_and_write": false, 00:07:10.015 "flush": true, 00:07:10.015 "nvme_admin": false, 00:07:10.015 "nvme_io": false, 00:07:10.015 "read": true, 00:07:10.015 "reset": true, 00:07:10.015 "unmap": true, 00:07:10.015 "write": true, 00:07:10.015 "write_zeroes": true 00:07:10.015 }, 00:07:10.015 "uuid": "b1677f08-d1e6-471f-8d9b-d3aad2a6283e", 00:07:10.015 "zoned": false 00:07:10.015 } 00:07:10.015 ]' 00:07:10.015 02:04:24 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:10.015 02:04:24 -- common/autotest_common.sh@1362 -- # bs=512 00:07:10.015 02:04:24 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:10.015 02:04:24 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:10.015 02:04:24 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:10.015 02:04:24 -- common/autotest_common.sh@1367 -- # echo 512 00:07:10.015 02:04:24 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:10.015 02:04:24 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.272 02:04:24 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:10.272 02:04:24 -- common/autotest_common.sh@1177 -- # local i=0 00:07:10.272 02:04:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:10.272 02:04:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:10.272 02:04:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:12.170 02:04:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:12.170 02:04:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:12.170 02:04:26 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:12.170 02:04:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:12.170 02:04:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:12.170 02:04:26 -- common/autotest_common.sh@1187 -- # return 0 00:07:12.170 02:04:26 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:12.170 02:04:26 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:12.170 02:04:26 -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:12.170 02:04:26 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:12.170 02:04:26 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:12.170 02:04:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:12.170 02:04:26 -- setup/common.sh@80 -- # echo 536870912 00:07:12.170 02:04:26 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:12.170 02:04:26 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:12.170 02:04:26 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:12.170 02:04:26 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:12.170 02:04:26 -- target/filesystem.sh@69 -- # partprobe 00:07:12.428 02:04:26 -- target/filesystem.sh@70 -- # sleep 1 00:07:13.362 02:04:27 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:13.362 02:04:27 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:13.362 02:04:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:13.362 02:04:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.362 02:04:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.362 ************************************ 00:07:13.362 START TEST filesystem_ext4 00:07:13.362 ************************************ 00:07:13.362 02:04:27 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:13.362 02:04:27 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:13.362 02:04:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.362 02:04:27 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:13.362 02:04:27 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:13.362 02:04:27 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:13.362 02:04:27 -- common/autotest_common.sh@904 -- # local i=0 00:07:13.362 02:04:27 -- common/autotest_common.sh@905 -- # local force 00:07:13.362 02:04:27 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:13.362 02:04:27 -- common/autotest_common.sh@908 -- # force=-F 00:07:13.362 02:04:27 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:13.362 mke2fs 1.46.5 (30-Dec-2021) 00:07:13.362 Discarding device blocks: 0/522240 done 00:07:13.362 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:13.362 Filesystem UUID: 0751c7a7-d5db-47c7-9384-f506809ebe4a 00:07:13.362 Superblock backups stored on blocks: 00:07:13.362 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:13.362 00:07:13.362 Allocating group tables: 0/64 done 00:07:13.362 Writing inode tables: 0/64 done 00:07:13.362 Creating journal (8192 blocks): done 00:07:13.362 Writing superblocks and filesystem accounting information: 0/64 done 00:07:13.362 00:07:13.362 02:04:27 -- common/autotest_common.sh@921 -- # return 0 00:07:13.362 02:04:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.362 02:04:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.621 02:04:28 -- target/filesystem.sh@25 -- # sync 00:07:13.621 02:04:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.621 02:04:28 -- target/filesystem.sh@27 -- # sync 00:07:13.621 02:04:28 -- target/filesystem.sh@29 -- # i=0 00:07:13.621 02:04:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.621 02:04:28 -- target/filesystem.sh@37 -- # kill -0 60412 00:07:13.621 02:04:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.621 02:04:28 -- 
target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.621 02:04:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.621 02:04:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.621 ************************************ 00:07:13.621 END TEST filesystem_ext4 00:07:13.621 ************************************ 00:07:13.621 00:07:13.621 real 0m0.352s 00:07:13.621 user 0m0.029s 00:07:13.621 sys 0m0.047s 00:07:13.621 02:04:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.621 02:04:28 -- common/autotest_common.sh@10 -- # set +x 00:07:13.621 02:04:28 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:13.622 02:04:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:13.622 02:04:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.622 02:04:28 -- common/autotest_common.sh@10 -- # set +x 00:07:13.622 ************************************ 00:07:13.622 START TEST filesystem_btrfs 00:07:13.622 ************************************ 00:07:13.622 02:04:28 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:13.622 02:04:28 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:13.622 02:04:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.622 02:04:28 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:13.622 02:04:28 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:13.622 02:04:28 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:13.622 02:04:28 -- common/autotest_common.sh@904 -- # local i=0 00:07:13.622 02:04:28 -- common/autotest_common.sh@905 -- # local force 00:07:13.622 02:04:28 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:13.622 02:04:28 -- common/autotest_common.sh@910 -- # force=-f 00:07:13.622 02:04:28 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:13.880 btrfs-progs v6.6.2 00:07:13.880 See https://btrfs.readthedocs.io for more information. 00:07:13.880 00:07:13.880 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:13.880 NOTE: several default settings have changed in version 5.15, please make sure 00:07:13.880 this does not affect your deployments: 00:07:13.880 - DUP for metadata (-m dup) 00:07:13.880 - enabled no-holes (-O no-holes) 00:07:13.880 - enabled free-space-tree (-R free-space-tree) 00:07:13.880 00:07:13.880 Label: (null) 00:07:13.880 UUID: c69064fd-68e0-4fad-9d88-3a371d55f1c7 00:07:13.880 Node size: 16384 00:07:13.880 Sector size: 4096 00:07:13.880 Filesystem size: 510.00MiB 00:07:13.880 Block group profiles: 00:07:13.880 Data: single 8.00MiB 00:07:13.880 Metadata: DUP 32.00MiB 00:07:13.880 System: DUP 8.00MiB 00:07:13.880 SSD detected: yes 00:07:13.880 Zoned device: no 00:07:13.880 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:13.880 Runtime features: free-space-tree 00:07:13.880 Checksum: crc32c 00:07:13.880 Number of devices: 1 00:07:13.880 Devices: 00:07:13.880 ID SIZE PATH 00:07:13.880 1 510.00MiB /dev/nvme0n1p1 00:07:13.880 00:07:13.880 02:04:28 -- common/autotest_common.sh@921 -- # return 0 00:07:13.880 02:04:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.880 02:04:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.880 02:04:28 -- target/filesystem.sh@25 -- # sync 00:07:13.880 02:04:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:14.138 02:04:28 -- target/filesystem.sh@27 -- # sync 00:07:14.138 02:04:28 -- target/filesystem.sh@29 -- # i=0 00:07:14.138 02:04:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:14.138 02:04:28 -- target/filesystem.sh@37 -- # kill -0 60412 00:07:14.138 02:04:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:14.138 02:04:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:14.138 02:04:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:14.138 02:04:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:14.138 ************************************ 00:07:14.138 END TEST filesystem_btrfs 00:07:14.138 ************************************ 00:07:14.138 00:07:14.138 real 0m0.335s 00:07:14.138 user 0m0.019s 00:07:14.138 sys 0m0.068s 00:07:14.138 02:04:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.138 02:04:28 -- common/autotest_common.sh@10 -- # set +x 00:07:14.138 02:04:28 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:14.138 02:04:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:14.138 02:04:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.138 02:04:28 -- common/autotest_common.sh@10 -- # set +x 00:07:14.138 ************************************ 00:07:14.138 START TEST filesystem_xfs 00:07:14.138 ************************************ 00:07:14.138 02:04:28 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:14.138 02:04:28 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:14.138 02:04:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.138 02:04:28 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:14.138 02:04:28 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:14.138 02:04:28 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:14.138 02:04:28 -- common/autotest_common.sh@904 -- # local i=0 00:07:14.138 02:04:28 -- common/autotest_common.sh@905 -- # local force 00:07:14.138 02:04:28 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:14.138 02:04:28 -- common/autotest_common.sh@910 -- # force=-f 00:07:14.138 02:04:28 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:14.138 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:14.138 = sectsz=512 attr=2, projid32bit=1 00:07:14.138 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:14.138 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:14.138 data = bsize=4096 blocks=130560, imaxpct=25 00:07:14.138 = sunit=0 swidth=0 blks 00:07:14.138 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:14.138 log =internal log bsize=4096 blocks=16384, version=2 00:07:14.138 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:14.138 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:15.069 Discarding blocks...Done. 00:07:15.069 02:04:29 -- common/autotest_common.sh@921 -- # return 0 00:07:15.069 02:04:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:17.593 02:04:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:17.593 02:04:31 -- target/filesystem.sh@25 -- # sync 00:07:17.593 02:04:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:17.593 02:04:31 -- target/filesystem.sh@27 -- # sync 00:07:17.593 02:04:31 -- target/filesystem.sh@29 -- # i=0 00:07:17.593 02:04:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:17.593 02:04:31 -- target/filesystem.sh@37 -- # kill -0 60412 00:07:17.593 02:04:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:17.593 02:04:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:17.593 02:04:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:17.593 02:04:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:17.593 ************************************ 00:07:17.593 END TEST filesystem_xfs 00:07:17.593 ************************************ 00:07:17.593 00:07:17.593 real 0m3.148s 00:07:17.593 user 0m0.021s 00:07:17.593 sys 0m0.047s 00:07:17.593 02:04:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.593 02:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.593 02:04:31 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:17.593 02:04:31 -- target/filesystem.sh@93 -- # sync 00:07:17.593 02:04:31 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.593 02:04:31 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.593 02:04:31 -- common/autotest_common.sh@1198 -- # local i=0 00:07:17.593 02:04:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:17.593 02:04:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.593 02:04:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:17.593 02:04:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.593 02:04:31 -- common/autotest_common.sh@1210 -- # return 0 00:07:17.593 02:04:31 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.593 02:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.593 02:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.593 02:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.593 02:04:31 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:17.593 02:04:31 -- target/filesystem.sh@101 -- # killprocess 60412 00:07:17.593 02:04:31 -- common/autotest_common.sh@926 -- # '[' -z 60412 ']' 00:07:17.593 02:04:31 -- common/autotest_common.sh@930 -- # kill -0 60412 00:07:17.593 02:04:31 -- 
common/autotest_common.sh@931 -- # uname 00:07:17.593 02:04:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:17.593 02:04:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60412 00:07:17.593 02:04:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:17.593 02:04:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:17.593 killing process with pid 60412 00:07:17.593 02:04:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60412' 00:07:17.593 02:04:31 -- common/autotest_common.sh@945 -- # kill 60412 00:07:17.593 02:04:31 -- common/autotest_common.sh@950 -- # wait 60412 00:07:17.593 02:04:32 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:17.593 00:07:17.593 real 0m9.086s 00:07:17.593 user 0m34.557s 00:07:17.593 sys 0m1.413s 00:07:17.593 02:04:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.593 02:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.593 ************************************ 00:07:17.593 END TEST nvmf_filesystem_no_in_capsule 00:07:17.593 ************************************ 00:07:17.852 02:04:32 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:17.852 02:04:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:17.852 02:04:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.852 02:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.852 ************************************ 00:07:17.852 START TEST nvmf_filesystem_in_capsule 00:07:17.852 ************************************ 00:07:17.852 02:04:32 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:17.852 02:04:32 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:17.852 02:04:32 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:17.852 02:04:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:17.852 02:04:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:17.852 02:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.852 02:04:32 -- nvmf/common.sh@469 -- # nvmfpid=60725 00:07:17.852 02:04:32 -- nvmf/common.sh@470 -- # waitforlisten 60725 00:07:17.852 02:04:32 -- common/autotest_common.sh@819 -- # '[' -z 60725 ']' 00:07:17.852 02:04:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.852 02:04:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.852 02:04:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:17.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.852 02:04:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.852 02:04:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:17.852 02:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.852 [2024-05-14 02:04:32.285458] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
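Both suites manage the target process the same way: launch nvmf_tgt inside the test namespace, block until its RPC socket answers (waitforlisten), and stop it with killprocess at the end of the run. A condensed sketch of that lifecycle, with the retry limits and pid bookkeeping of autotest_common.sh simplified and the rpc.py polling shown only as an illustration:

# Start the target inside the namespace and remember its pid.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# waitforlisten: poll the RPC socket until the app is ready (simplified loop).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done

# ...tests run here...

# killprocess: confirm the pid is still ours, then stop and reap it.
kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"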
00:07:17.852 [2024-05-14 02:04:32.285565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.852 [2024-05-14 02:04:32.426974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.110 [2024-05-14 02:04:32.497632] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:18.110 [2024-05-14 02:04:32.497811] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.110 [2024-05-14 02:04:32.497828] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.110 [2024-05-14 02:04:32.497838] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.110 [2024-05-14 02:04:32.497943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.110 [2024-05-14 02:04:32.498395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.110 [2024-05-14 02:04:32.498508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.110 [2024-05-14 02:04:32.498514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.677 02:04:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.677 02:04:33 -- common/autotest_common.sh@852 -- # return 0 00:07:18.677 02:04:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:18.677 02:04:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:18.677 02:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.934 02:04:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.934 02:04:33 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:18.934 02:04:33 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:18.934 02:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.934 02:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.934 [2024-05-14 02:04:33.271070] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.934 02:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.934 02:04:33 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:18.934 02:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.934 02:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.934 Malloc1 00:07:18.934 02:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.934 02:04:33 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:18.934 02:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.934 02:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.934 02:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.934 02:04:33 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.934 02:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.934 02:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.934 02:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.934 02:04:33 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.934 02:04:33 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.934 02:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.934 [2024-05-14 02:04:33.397867] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.934 02:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.934 02:04:33 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:18.934 02:04:33 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:18.934 02:04:33 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:18.934 02:04:33 -- common/autotest_common.sh@1359 -- # local bs 00:07:18.934 02:04:33 -- common/autotest_common.sh@1360 -- # local nb 00:07:18.934 02:04:33 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:18.935 02:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.935 02:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.935 02:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.935 02:04:33 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:18.935 { 00:07:18.935 "aliases": [ 00:07:18.935 "46e07337-5ead-4c4d-97b0-9f39a96a7bfd" 00:07:18.935 ], 00:07:18.935 "assigned_rate_limits": { 00:07:18.935 "r_mbytes_per_sec": 0, 00:07:18.935 "rw_ios_per_sec": 0, 00:07:18.935 "rw_mbytes_per_sec": 0, 00:07:18.935 "w_mbytes_per_sec": 0 00:07:18.935 }, 00:07:18.935 "block_size": 512, 00:07:18.935 "claim_type": "exclusive_write", 00:07:18.935 "claimed": true, 00:07:18.935 "driver_specific": {}, 00:07:18.935 "memory_domains": [ 00:07:18.935 { 00:07:18.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.935 "dma_device_type": 2 00:07:18.935 } 00:07:18.935 ], 00:07:18.935 "name": "Malloc1", 00:07:18.935 "num_blocks": 1048576, 00:07:18.935 "product_name": "Malloc disk", 00:07:18.935 "supported_io_types": { 00:07:18.935 "abort": true, 00:07:18.935 "compare": false, 00:07:18.935 "compare_and_write": false, 00:07:18.935 "flush": true, 00:07:18.935 "nvme_admin": false, 00:07:18.935 "nvme_io": false, 00:07:18.935 "read": true, 00:07:18.935 "reset": true, 00:07:18.935 "unmap": true, 00:07:18.935 "write": true, 00:07:18.935 "write_zeroes": true 00:07:18.935 }, 00:07:18.935 "uuid": "46e07337-5ead-4c4d-97b0-9f39a96a7bfd", 00:07:18.935 "zoned": false 00:07:18.935 } 00:07:18.935 ]' 00:07:18.935 02:04:33 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:18.935 02:04:33 -- common/autotest_common.sh@1362 -- # bs=512 00:07:18.935 02:04:33 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:19.192 02:04:33 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:19.192 02:04:33 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:19.192 02:04:33 -- common/autotest_common.sh@1367 -- # echo 512 00:07:19.192 02:04:33 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:19.192 02:04:33 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.192 02:04:33 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:19.192 02:04:33 -- common/autotest_common.sh@1177 -- # local i=0 00:07:19.192 02:04:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.192 02:04:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:19.192 02:04:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:21.721 02:04:35 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:21.721 02:04:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.721 02:04:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:21.721 02:04:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:21.721 02:04:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.721 02:04:35 -- common/autotest_common.sh@1187 -- # return 0 00:07:21.721 02:04:35 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:21.721 02:04:35 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:21.721 02:04:35 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:21.721 02:04:35 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:21.721 02:04:35 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:21.721 02:04:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:21.721 02:04:35 -- setup/common.sh@80 -- # echo 536870912 00:07:21.721 02:04:35 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:21.721 02:04:35 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:21.721 02:04:35 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:21.721 02:04:35 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:21.721 02:04:35 -- target/filesystem.sh@69 -- # partprobe 00:07:21.721 02:04:35 -- target/filesystem.sh@70 -- # sleep 1 00:07:22.287 02:04:36 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:22.287 02:04:36 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:22.287 02:04:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:22.287 02:04:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.287 02:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:22.287 ************************************ 00:07:22.287 START TEST filesystem_in_capsule_ext4 00:07:22.287 ************************************ 00:07:22.287 02:04:36 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:22.287 02:04:36 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:22.287 02:04:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:22.287 02:04:36 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:22.288 02:04:36 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:22.288 02:04:36 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:22.288 02:04:36 -- common/autotest_common.sh@904 -- # local i=0 00:07:22.288 02:04:36 -- common/autotest_common.sh@905 -- # local force 00:07:22.288 02:04:36 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:22.288 02:04:36 -- common/autotest_common.sh@908 -- # force=-F 00:07:22.288 02:04:36 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:22.288 mke2fs 1.46.5 (30-Dec-2021) 00:07:22.603 Discarding device blocks: 0/522240 done 00:07:22.603 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:22.603 Filesystem UUID: 5e9248b3-06bb-4413-aa39-2678a7454e3b 00:07:22.603 Superblock backups stored on blocks: 00:07:22.603 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:22.603 00:07:22.603 Allocating group tables: 0/64 done 00:07:22.603 Writing inode tables: 0/64 done 00:07:22.603 Creating journal (8192 blocks): done 00:07:22.603 Writing superblocks and filesystem accounting information: 0/64 done 00:07:22.603 00:07:22.603 
02:04:36 -- common/autotest_common.sh@921 -- # return 0 00:07:22.603 02:04:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:22.603 02:04:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:22.603 02:04:37 -- target/filesystem.sh@25 -- # sync 00:07:22.603 02:04:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:22.603 02:04:37 -- target/filesystem.sh@27 -- # sync 00:07:22.603 02:04:37 -- target/filesystem.sh@29 -- # i=0 00:07:22.603 02:04:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:22.603 02:04:37 -- target/filesystem.sh@37 -- # kill -0 60725 00:07:22.603 02:04:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:22.603 02:04:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:22.603 02:04:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:22.603 02:04:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:22.603 ************************************ 00:07:22.603 END TEST filesystem_in_capsule_ext4 00:07:22.603 ************************************ 00:07:22.603 00:07:22.603 real 0m0.292s 00:07:22.603 user 0m0.023s 00:07:22.603 sys 0m0.043s 00:07:22.603 02:04:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.603 02:04:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.861 02:04:37 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:22.861 02:04:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:22.861 02:04:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.861 02:04:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.861 ************************************ 00:07:22.861 START TEST filesystem_in_capsule_btrfs 00:07:22.861 ************************************ 00:07:22.861 02:04:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:22.861 02:04:37 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:22.861 02:04:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:22.861 02:04:37 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:22.861 02:04:37 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:22.861 02:04:37 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:22.861 02:04:37 -- common/autotest_common.sh@904 -- # local i=0 00:07:22.861 02:04:37 -- common/autotest_common.sh@905 -- # local force 00:07:22.861 02:04:37 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:22.861 02:04:37 -- common/autotest_common.sh@910 -- # force=-f 00:07:22.861 02:04:37 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:22.861 btrfs-progs v6.6.2 00:07:22.861 See https://btrfs.readthedocs.io for more information. 00:07:22.861 00:07:22.861 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:22.861 NOTE: several default settings have changed in version 5.15, please make sure 00:07:22.861 this does not affect your deployments: 00:07:22.861 - DUP for metadata (-m dup) 00:07:22.861 - enabled no-holes (-O no-holes) 00:07:22.861 - enabled free-space-tree (-R free-space-tree) 00:07:22.861 00:07:22.861 Label: (null) 00:07:22.861 UUID: 8fb15bd6-947b-4697-9392-2695d3ebc534 00:07:22.861 Node size: 16384 00:07:22.861 Sector size: 4096 00:07:22.861 Filesystem size: 510.00MiB 00:07:22.861 Block group profiles: 00:07:22.861 Data: single 8.00MiB 00:07:22.861 Metadata: DUP 32.00MiB 00:07:22.861 System: DUP 8.00MiB 00:07:22.861 SSD detected: yes 00:07:22.861 Zoned device: no 00:07:22.861 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:22.861 Runtime features: free-space-tree 00:07:22.861 Checksum: crc32c 00:07:22.861 Number of devices: 1 00:07:22.861 Devices: 00:07:22.861 ID SIZE PATH 00:07:22.861 1 510.00MiB /dev/nvme0n1p1 00:07:22.861 00:07:22.861 02:04:37 -- common/autotest_common.sh@921 -- # return 0 00:07:22.861 02:04:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:22.861 02:04:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:22.861 02:04:37 -- target/filesystem.sh@25 -- # sync 00:07:22.861 02:04:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:22.861 02:04:37 -- target/filesystem.sh@27 -- # sync 00:07:22.861 02:04:37 -- target/filesystem.sh@29 -- # i=0 00:07:22.861 02:04:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:22.861 02:04:37 -- target/filesystem.sh@37 -- # kill -0 60725 00:07:22.861 02:04:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:22.861 02:04:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:22.861 02:04:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:22.861 02:04:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:22.861 ************************************ 00:07:22.861 END TEST filesystem_in_capsule_btrfs 00:07:22.861 ************************************ 00:07:22.861 00:07:22.861 real 0m0.178s 00:07:22.861 user 0m0.016s 00:07:22.861 sys 0m0.063s 00:07:22.861 02:04:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.861 02:04:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.861 02:04:37 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:22.861 02:04:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:22.861 02:04:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.861 02:04:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.861 ************************************ 00:07:22.861 START TEST filesystem_in_capsule_xfs 00:07:22.861 ************************************ 00:07:22.861 02:04:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:22.861 02:04:37 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:22.861 02:04:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:22.861 02:04:37 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:22.861 02:04:37 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:22.861 02:04:37 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:22.861 02:04:37 -- common/autotest_common.sh@904 -- # local i=0 00:07:22.861 02:04:37 -- common/autotest_common.sh@905 -- # local force 00:07:22.861 02:04:37 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:22.861 02:04:37 -- common/autotest_common.sh@910 -- # force=-f 
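Each filesystem_* subtest in this trace runs the same verify cycle from target/filesystem.sh once mkfs succeeds: mount the partition, create and delete a file with syncs in between, unmount, then confirm the target is still alive and the namespace is still visible. In outline (the retry loop around umount omitted), with the filesystem.sh line numbers from the trace noted in comments:

mount /dev/nvme0n1p1 /mnt/device          # filesystem.sh@23
touch /mnt/device/aaa                     # @24
sync                                      # @25
rm /mnt/device/aaa                        # @26
sync                                      # @27
umount /mnt/device                        # @30
kill -0 "$nvmfpid"                        # @37: target process still running?
lsblk -l -o NAME | grep -q -w nvme0n1     # @40: namespace still exported
lsblk -l -o NAME | grep -q -w nvme0n1p1   # @43: data partition still present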
00:07:22.861 02:04:37 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:23.119 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:23.119 = sectsz=512 attr=2, projid32bit=1 00:07:23.119 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:23.119 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:23.119 data = bsize=4096 blocks=130560, imaxpct=25 00:07:23.119 = sunit=0 swidth=0 blks 00:07:23.119 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:23.119 log =internal log bsize=4096 blocks=16384, version=2 00:07:23.119 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:23.119 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:23.685 Discarding blocks...Done. 00:07:23.685 02:04:38 -- common/autotest_common.sh@921 -- # return 0 00:07:23.685 02:04:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.583 02:04:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.583 02:04:39 -- target/filesystem.sh@25 -- # sync 00:07:25.583 02:04:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.583 02:04:39 -- target/filesystem.sh@27 -- # sync 00:07:25.583 02:04:39 -- target/filesystem.sh@29 -- # i=0 00:07:25.583 02:04:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.583 02:04:39 -- target/filesystem.sh@37 -- # kill -0 60725 00:07:25.583 02:04:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.583 02:04:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.583 02:04:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.583 02:04:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.583 ************************************ 00:07:25.583 END TEST filesystem_in_capsule_xfs 00:07:25.583 ************************************ 00:07:25.583 00:07:25.583 real 0m2.542s 00:07:25.583 user 0m0.018s 00:07:25.583 sys 0m0.046s 00:07:25.583 02:04:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.583 02:04:39 -- common/autotest_common.sh@10 -- # set +x 00:07:25.583 02:04:39 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:25.583 02:04:40 -- target/filesystem.sh@93 -- # sync 00:07:25.583 02:04:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:25.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.583 02:04:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:25.583 02:04:40 -- common/autotest_common.sh@1198 -- # local i=0 00:07:25.583 02:04:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.583 02:04:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:25.583 02:04:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:25.583 02:04:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.583 02:04:40 -- common/autotest_common.sh@1210 -- # return 0 00:07:25.583 02:04:40 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.583 02:04:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.583 02:04:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.583 02:04:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.583 02:04:40 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:25.583 02:04:40 -- target/filesystem.sh@101 -- # killprocess 60725 00:07:25.583 02:04:40 -- common/autotest_common.sh@926 -- # '[' -z 60725 ']' 00:07:25.583 02:04:40 -- common/autotest_common.sh@930 -- # kill -0 60725 
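The in-capsule run that just finished wires the host and target together with the same sequence as the earlier run, only passing -c 4096 to the transport so commands can carry in-capsule data. Condensed below; rpc_cmd is a stand-in for the autotest helper that invokes scripts/rpc.py, and the hostnqn/hostid values come from nvmf/common.sh:

rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # stand-in for the real helper

# Target side: TCP transport with 4096-byte in-capsule data, a malloc bdev,
# and a subsystem exporting it on 10.0.0.2:4420.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
rpc_cmd bdev_malloc_create 512 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: attach the namespace and partition it for the filesystem checks.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%

# Teardown mirrors the setup.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1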
00:07:25.583 02:04:40 -- common/autotest_common.sh@931 -- # uname 00:07:25.583 02:04:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:25.584 02:04:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60725 00:07:25.584 02:04:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:25.584 02:04:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:25.584 killing process with pid 60725 00:07:25.584 02:04:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60725' 00:07:25.584 02:04:40 -- common/autotest_common.sh@945 -- # kill 60725 00:07:25.584 02:04:40 -- common/autotest_common.sh@950 -- # wait 60725 00:07:25.842 02:04:40 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:25.842 00:07:25.842 real 0m8.184s 00:07:25.842 user 0m31.042s 00:07:25.842 sys 0m1.383s 00:07:25.842 02:04:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.842 02:04:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.842 ************************************ 00:07:25.842 END TEST nvmf_filesystem_in_capsule 00:07:25.842 ************************************ 00:07:26.100 02:04:40 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:26.100 02:04:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:26.100 02:04:40 -- nvmf/common.sh@116 -- # sync 00:07:26.100 02:04:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:26.100 02:04:40 -- nvmf/common.sh@119 -- # set +e 00:07:26.101 02:04:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:26.101 02:04:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:26.101 rmmod nvme_tcp 00:07:26.101 rmmod nvme_fabrics 00:07:26.101 rmmod nvme_keyring 00:07:26.101 02:04:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:26.101 02:04:40 -- nvmf/common.sh@123 -- # set -e 00:07:26.101 02:04:40 -- nvmf/common.sh@124 -- # return 0 00:07:26.101 02:04:40 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:26.101 02:04:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:26.101 02:04:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:26.101 02:04:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:26.101 02:04:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:26.101 02:04:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:26.101 02:04:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.101 02:04:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.101 02:04:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.101 02:04:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:26.101 00:07:26.101 real 0m18.008s 00:07:26.101 user 1m5.796s 00:07:26.101 sys 0m3.139s 00:07:26.101 02:04:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.101 ************************************ 00:07:26.101 END TEST nvmf_filesystem 00:07:26.101 02:04:40 -- common/autotest_common.sh@10 -- # set +x 00:07:26.101 ************************************ 00:07:26.101 02:04:40 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:26.101 02:04:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:26.101 02:04:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.101 02:04:40 -- common/autotest_common.sh@10 -- # set +x 00:07:26.101 ************************************ 00:07:26.101 START TEST nvmf_discovery 00:07:26.101 ************************************ 00:07:26.101 02:04:40 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:26.101 * Looking for test storage... 00:07:26.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.101 02:04:40 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.101 02:04:40 -- nvmf/common.sh@7 -- # uname -s 00:07:26.101 02:04:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.101 02:04:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.101 02:04:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.101 02:04:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.101 02:04:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.101 02:04:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.101 02:04:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.101 02:04:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.101 02:04:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.101 02:04:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.101 02:04:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:07:26.101 02:04:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:07:26.101 02:04:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.101 02:04:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.101 02:04:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.101 02:04:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.101 02:04:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.101 02:04:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.101 02:04:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.101 02:04:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.101 02:04:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.101 02:04:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.101 02:04:40 -- paths/export.sh@5 -- # export PATH 00:07:26.101 02:04:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.101 02:04:40 -- nvmf/common.sh@46 -- # : 0 00:07:26.101 02:04:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:26.101 02:04:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:26.101 02:04:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:26.101 02:04:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.101 02:04:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.101 02:04:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:26.101 02:04:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:26.101 02:04:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.359 02:04:40 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:26.359 02:04:40 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:26.359 02:04:40 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:26.359 02:04:40 -- target/discovery.sh@15 -- # hash nvme 00:07:26.359 02:04:40 -- target/discovery.sh@20 -- # nvmftestinit 00:07:26.359 02:04:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:26.359 02:04:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.359 02:04:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:26.359 02:04:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:26.359 02:04:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:26.359 02:04:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.359 02:04:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.359 02:04:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.359 02:04:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:26.359 02:04:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:26.359 02:04:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:26.359 02:04:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:26.359 02:04:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:26.359 02:04:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:26.359 02:04:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.359 02:04:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.359 02:04:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:26.359 02:04:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:26.359 02:04:40 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:26.359 02:04:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:26.359 02:04:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:26.359 02:04:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.359 02:04:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:26.359 02:04:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:26.359 02:04:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:26.359 02:04:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:26.359 02:04:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:26.359 02:04:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:26.359 Cannot find device "nvmf_tgt_br" 00:07:26.359 02:04:40 -- nvmf/common.sh@154 -- # true 00:07:26.359 02:04:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:26.359 Cannot find device "nvmf_tgt_br2" 00:07:26.359 02:04:40 -- nvmf/common.sh@155 -- # true 00:07:26.359 02:04:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:26.359 02:04:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:26.359 Cannot find device "nvmf_tgt_br" 00:07:26.359 02:04:40 -- nvmf/common.sh@157 -- # true 00:07:26.359 02:04:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:26.359 Cannot find device "nvmf_tgt_br2" 00:07:26.359 02:04:40 -- nvmf/common.sh@158 -- # true 00:07:26.359 02:04:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:26.359 02:04:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:26.359 02:04:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:26.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.359 02:04:40 -- nvmf/common.sh@161 -- # true 00:07:26.359 02:04:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:26.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.359 02:04:40 -- nvmf/common.sh@162 -- # true 00:07:26.359 02:04:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:26.359 02:04:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:26.359 02:04:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:26.359 02:04:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:26.359 02:04:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:26.359 02:04:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:26.359 02:04:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:26.617 02:04:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:26.617 02:04:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:26.617 02:04:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:26.617 02:04:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:26.617 02:04:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:26.617 02:04:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:26.617 02:04:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:26.617 02:04:40 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:26.617 02:04:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:26.617 02:04:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:26.617 02:04:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:26.617 02:04:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:26.617 02:04:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:26.617 02:04:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:26.617 02:04:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:26.617 02:04:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:26.617 02:04:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:26.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:07:26.617 00:07:26.617 --- 10.0.0.2 ping statistics --- 00:07:26.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.617 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:07:26.617 02:04:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:26.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:26.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:07:26.617 00:07:26.617 --- 10.0.0.3 ping statistics --- 00:07:26.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.617 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:26.617 02:04:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:26.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:26.617 00:07:26.617 --- 10.0.0.1 ping statistics --- 00:07:26.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.617 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:26.617 02:04:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.617 02:04:41 -- nvmf/common.sh@421 -- # return 0 00:07:26.617 02:04:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:26.617 02:04:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.617 02:04:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:26.617 02:04:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:26.617 02:04:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.617 02:04:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:26.617 02:04:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:26.617 02:04:41 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:26.617 02:04:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:26.617 02:04:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:26.617 02:04:41 -- common/autotest_common.sh@10 -- # set +x 00:07:26.617 02:04:41 -- nvmf/common.sh@469 -- # nvmfpid=61174 00:07:26.617 02:04:41 -- nvmf/common.sh@470 -- # waitforlisten 61174 00:07:26.617 02:04:41 -- common/autotest_common.sh@819 -- # '[' -z 61174 ']' 00:07:26.617 02:04:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.617 02:04:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:26.617 02:04:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
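Before the discovery test starts its own target, nvmf_veth_init rebuilds the virtual topology traced above: a network namespace for the target, veth pairs whose host-side ends join a bridge, addresses on 10.0.0.0/24, an iptables rule admitting port 4420, and ping checks in both directions. The essential commands, condensed from the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is handled the same way and left out here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1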
00:07:26.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.617 02:04:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:26.617 02:04:41 -- common/autotest_common.sh@10 -- # set +x 00:07:26.617 02:04:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.617 [2024-05-14 02:04:41.155029] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:26.617 [2024-05-14 02:04:41.155105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.875 [2024-05-14 02:04:41.289261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.875 [2024-05-14 02:04:41.348504] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:26.875 [2024-05-14 02:04:41.348645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.875 [2024-05-14 02:04:41.348659] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.875 [2024-05-14 02:04:41.348668] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.875 [2024-05-14 02:04:41.348760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.875 [2024-05-14 02:04:41.348889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.875 [2024-05-14 02:04:41.348814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.875 [2024-05-14 02:04:41.348876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.839 02:04:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:27.839 02:04:42 -- common/autotest_common.sh@852 -- # return 0 00:07:27.839 02:04:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:27.839 02:04:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 02:04:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.839 02:04:42 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 [2024-05-14 02:04:42.316802] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.839 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@26 -- # seq 1 4 00:07:27.839 02:04:42 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.839 02:04:42 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 Null1 00:07:27.839 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 02:04:42 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 [2024-05-14 02:04:42.365949] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.839 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.839 02:04:42 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 Null2 00:07:27.839 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.839 02:04:42 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 Null3 00:07:27.839 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.839 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.839 02:04:42 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:27.839 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.839 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.098 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.098 02:04:42 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:28.098 02:04:42 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.098 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.098 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.098 02:04:42 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:28.098 02:04:42 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:28.098 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.098 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.098 Null4 00:07:28.098 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.098 02:04:42 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:28.098 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.098 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.098 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.098 02:04:42 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:28.098 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.098 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.098 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.098 02:04:42 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:28.098 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.098 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.098 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.098 02:04:42 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.098 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.098 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.098 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.098 02:04:42 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:28.098 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.098 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.098 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.098 02:04:42 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -a 10.0.0.2 -s 4420 00:07:28.098 00:07:28.098 Discovery Log Number of Records 6, Generation counter 6 00:07:28.098 =====Discovery Log Entry 0====== 00:07:28.098 trtype: tcp 00:07:28.098 adrfam: ipv4 00:07:28.098 subtype: current discovery subsystem 00:07:28.098 treq: not required 00:07:28.098 portid: 0 00:07:28.098 trsvcid: 4420 00:07:28.098 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:28.098 traddr: 10.0.0.2 00:07:28.098 eflags: explicit discovery connections, duplicate discovery information 00:07:28.098 sectype: none 00:07:28.098 =====Discovery Log Entry 1====== 00:07:28.098 trtype: tcp 00:07:28.098 adrfam: ipv4 00:07:28.098 subtype: nvme subsystem 00:07:28.098 treq: not required 00:07:28.098 portid: 0 00:07:28.098 trsvcid: 4420 00:07:28.098 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:28.098 traddr: 10.0.0.2 00:07:28.098 eflags: none 00:07:28.098 sectype: none 00:07:28.098 =====Discovery Log Entry 2====== 00:07:28.098 trtype: tcp 00:07:28.098 adrfam: ipv4 00:07:28.098 subtype: nvme subsystem 00:07:28.098 treq: not required 00:07:28.098 
portid: 0 00:07:28.098 trsvcid: 4420 00:07:28.098 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:28.098 traddr: 10.0.0.2 00:07:28.098 eflags: none 00:07:28.098 sectype: none 00:07:28.098 =====Discovery Log Entry 3====== 00:07:28.098 trtype: tcp 00:07:28.098 adrfam: ipv4 00:07:28.098 subtype: nvme subsystem 00:07:28.098 treq: not required 00:07:28.098 portid: 0 00:07:28.098 trsvcid: 4420 00:07:28.098 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:28.098 traddr: 10.0.0.2 00:07:28.098 eflags: none 00:07:28.098 sectype: none 00:07:28.098 =====Discovery Log Entry 4====== 00:07:28.098 trtype: tcp 00:07:28.098 adrfam: ipv4 00:07:28.098 subtype: nvme subsystem 00:07:28.098 treq: not required 00:07:28.098 portid: 0 00:07:28.098 trsvcid: 4420 00:07:28.098 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:28.098 traddr: 10.0.0.2 00:07:28.098 eflags: none 00:07:28.098 sectype: none 00:07:28.098 =====Discovery Log Entry 5====== 00:07:28.098 trtype: tcp 00:07:28.098 adrfam: ipv4 00:07:28.098 subtype: discovery subsystem referral 00:07:28.098 treq: not required 00:07:28.098 portid: 0 00:07:28.098 trsvcid: 4430 00:07:28.098 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:28.098 traddr: 10.0.0.2 00:07:28.098 eflags: none 00:07:28.098 sectype: none 00:07:28.098 Perform nvmf subsystem discovery via RPC 00:07:28.098 02:04:42 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:28.098 02:04:42 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:28.098 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.098 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.098 [2024-05-14 02:04:42.557902] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:28.098 [ 00:07:28.098 { 00:07:28.098 "allow_any_host": true, 00:07:28.098 "hosts": [], 00:07:28.098 "listen_addresses": [ 00:07:28.098 { 00:07:28.098 "adrfam": "IPv4", 00:07:28.098 "traddr": "10.0.0.2", 00:07:28.098 "transport": "TCP", 00:07:28.098 "trsvcid": "4420", 00:07:28.098 "trtype": "TCP" 00:07:28.098 } 00:07:28.098 ], 00:07:28.098 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:28.098 "subtype": "Discovery" 00:07:28.098 }, 00:07:28.098 { 00:07:28.098 "allow_any_host": true, 00:07:28.098 "hosts": [], 00:07:28.098 "listen_addresses": [ 00:07:28.098 { 00:07:28.098 "adrfam": "IPv4", 00:07:28.098 "traddr": "10.0.0.2", 00:07:28.098 "transport": "TCP", 00:07:28.098 "trsvcid": "4420", 00:07:28.098 "trtype": "TCP" 00:07:28.098 } 00:07:28.098 ], 00:07:28.098 "max_cntlid": 65519, 00:07:28.098 "max_namespaces": 32, 00:07:28.098 "min_cntlid": 1, 00:07:28.098 "model_number": "SPDK bdev Controller", 00:07:28.098 "namespaces": [ 00:07:28.098 { 00:07:28.098 "bdev_name": "Null1", 00:07:28.098 "name": "Null1", 00:07:28.098 "nguid": "A916BC3A452943A0816DBA54C8B7170D", 00:07:28.098 "nsid": 1, 00:07:28.098 "uuid": "a916bc3a-4529-43a0-816d-ba54c8b7170d" 00:07:28.098 } 00:07:28.098 ], 00:07:28.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:28.098 "serial_number": "SPDK00000000000001", 00:07:28.098 "subtype": "NVMe" 00:07:28.098 }, 00:07:28.098 { 00:07:28.098 "allow_any_host": true, 00:07:28.098 "hosts": [], 00:07:28.098 "listen_addresses": [ 00:07:28.098 { 00:07:28.098 "adrfam": "IPv4", 00:07:28.098 "traddr": "10.0.0.2", 00:07:28.098 "transport": "TCP", 00:07:28.098 "trsvcid": "4420", 00:07:28.098 "trtype": "TCP" 00:07:28.098 } 00:07:28.098 ], 00:07:28.098 "max_cntlid": 65519, 00:07:28.098 
"max_namespaces": 32, 00:07:28.098 "min_cntlid": 1, 00:07:28.098 "model_number": "SPDK bdev Controller", 00:07:28.098 "namespaces": [ 00:07:28.098 { 00:07:28.098 "bdev_name": "Null2", 00:07:28.098 "name": "Null2", 00:07:28.098 "nguid": "8B24F0A294564F90A3A1BBE2D755B448", 00:07:28.098 "nsid": 1, 00:07:28.098 "uuid": "8b24f0a2-9456-4f90-a3a1-bbe2d755b448" 00:07:28.098 } 00:07:28.098 ], 00:07:28.098 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:28.098 "serial_number": "SPDK00000000000002", 00:07:28.098 "subtype": "NVMe" 00:07:28.098 }, 00:07:28.098 { 00:07:28.098 "allow_any_host": true, 00:07:28.098 "hosts": [], 00:07:28.098 "listen_addresses": [ 00:07:28.098 { 00:07:28.098 "adrfam": "IPv4", 00:07:28.098 "traddr": "10.0.0.2", 00:07:28.098 "transport": "TCP", 00:07:28.098 "trsvcid": "4420", 00:07:28.098 "trtype": "TCP" 00:07:28.098 } 00:07:28.098 ], 00:07:28.098 "max_cntlid": 65519, 00:07:28.098 "max_namespaces": 32, 00:07:28.098 "min_cntlid": 1, 00:07:28.098 "model_number": "SPDK bdev Controller", 00:07:28.098 "namespaces": [ 00:07:28.098 { 00:07:28.098 "bdev_name": "Null3", 00:07:28.098 "name": "Null3", 00:07:28.098 "nguid": "D6BB4BF800BD47918B1CDC4E695DC55F", 00:07:28.098 "nsid": 1, 00:07:28.098 "uuid": "d6bb4bf8-00bd-4791-8b1c-dc4e695dc55f" 00:07:28.098 } 00:07:28.099 ], 00:07:28.099 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:28.099 "serial_number": "SPDK00000000000003", 00:07:28.099 "subtype": "NVMe" 00:07:28.099 }, 00:07:28.099 { 00:07:28.099 "allow_any_host": true, 00:07:28.099 "hosts": [], 00:07:28.099 "listen_addresses": [ 00:07:28.099 { 00:07:28.099 "adrfam": "IPv4", 00:07:28.099 "traddr": "10.0.0.2", 00:07:28.099 "transport": "TCP", 00:07:28.099 "trsvcid": "4420", 00:07:28.099 "trtype": "TCP" 00:07:28.099 } 00:07:28.099 ], 00:07:28.099 "max_cntlid": 65519, 00:07:28.099 "max_namespaces": 32, 00:07:28.099 "min_cntlid": 1, 00:07:28.099 "model_number": "SPDK bdev Controller", 00:07:28.099 "namespaces": [ 00:07:28.099 { 00:07:28.099 "bdev_name": "Null4", 00:07:28.099 "name": "Null4", 00:07:28.099 "nguid": "F3115472D43E4A8A9CBBD9ADBD89E2FC", 00:07:28.099 "nsid": 1, 00:07:28.099 "uuid": "f3115472-d43e-4a8a-9cbb-d9adbd89e2fc" 00:07:28.099 } 00:07:28.099 ], 00:07:28.099 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:28.099 "serial_number": "SPDK00000000000004", 00:07:28.099 "subtype": "NVMe" 00:07:28.099 } 00:07:28.099 ] 00:07:28.099 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.099 02:04:42 -- target/discovery.sh@42 -- # seq 1 4 00:07:28.099 02:04:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:28.099 02:04:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:28.099 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.099 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.099 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.099 02:04:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:28.099 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.099 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.099 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.099 02:04:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:28.099 02:04:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:28.099 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.099 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.099 02:04:42 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.099 02:04:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:28.099 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.099 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.099 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.099 02:04:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:28.099 02:04:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:28.099 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.099 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.099 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.099 02:04:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:28.099 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.099 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.099 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.099 02:04:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:28.099 02:04:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:28.099 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.099 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.099 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.099 02:04:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:28.099 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.099 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.099 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.099 02:04:42 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:28.099 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.099 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.099 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.099 02:04:42 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:28.099 02:04:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.099 02:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.099 02:04:42 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:28.099 02:04:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.357 02:04:42 -- target/discovery.sh@49 -- # check_bdevs= 00:07:28.357 02:04:42 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:28.357 02:04:42 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:28.357 02:04:42 -- target/discovery.sh@57 -- # nvmftestfini 00:07:28.357 02:04:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:28.357 02:04:42 -- nvmf/common.sh@116 -- # sync 00:07:28.357 02:04:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:28.357 02:04:42 -- nvmf/common.sh@119 -- # set +e 00:07:28.357 02:04:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:28.357 02:04:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:28.357 rmmod nvme_tcp 00:07:28.357 rmmod nvme_fabrics 00:07:28.357 rmmod nvme_keyring 00:07:28.357 02:04:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:28.357 02:04:42 -- nvmf/common.sh@123 -- # set -e 00:07:28.357 02:04:42 -- nvmf/common.sh@124 -- # return 0 00:07:28.357 02:04:42 -- nvmf/common.sh@477 -- # '[' -n 61174 ']' 00:07:28.357 02:04:42 -- nvmf/common.sh@478 -- # killprocess 61174 00:07:28.357 02:04:42 -- 
common/autotest_common.sh@926 -- # '[' -z 61174 ']' 00:07:28.357 02:04:42 -- common/autotest_common.sh@930 -- # kill -0 61174 00:07:28.357 02:04:42 -- common/autotest_common.sh@931 -- # uname 00:07:28.357 02:04:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:28.357 02:04:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61174 00:07:28.357 killing process with pid 61174 00:07:28.357 02:04:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:28.357 02:04:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:28.357 02:04:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61174' 00:07:28.357 02:04:42 -- common/autotest_common.sh@945 -- # kill 61174 00:07:28.357 [2024-05-14 02:04:42.823960] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:28.357 02:04:42 -- common/autotest_common.sh@950 -- # wait 61174 00:07:28.616 02:04:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:28.616 02:04:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:28.616 02:04:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:28.616 02:04:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.616 02:04:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:28.616 02:04:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.616 02:04:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.616 02:04:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.616 02:04:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:28.617 00:07:28.617 real 0m2.459s 00:07:28.617 user 0m7.069s 00:07:28.617 sys 0m0.517s 00:07:28.617 02:04:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.617 02:04:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.617 ************************************ 00:07:28.617 END TEST nvmf_discovery 00:07:28.617 ************************************ 00:07:28.617 02:04:43 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:28.617 02:04:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:28.617 02:04:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.617 02:04:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.617 ************************************ 00:07:28.617 START TEST nvmf_referrals 00:07:28.617 ************************************ 00:07:28.617 02:04:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:28.617 * Looking for test storage... 
00:07:28.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:28.617 02:04:43 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:28.617 02:04:43 -- nvmf/common.sh@7 -- # uname -s 00:07:28.617 02:04:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.617 02:04:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.617 02:04:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.617 02:04:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.617 02:04:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.617 02:04:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.617 02:04:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.617 02:04:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.617 02:04:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.617 02:04:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.617 02:04:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:07:28.617 02:04:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:07:28.617 02:04:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.617 02:04:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.617 02:04:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:28.617 02:04:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.617 02:04:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.617 02:04:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.617 02:04:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.617 02:04:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.617 02:04:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.617 02:04:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.617 02:04:43 -- 
paths/export.sh@5 -- # export PATH 00:07:28.617 02:04:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.617 02:04:43 -- nvmf/common.sh@46 -- # : 0 00:07:28.617 02:04:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:28.617 02:04:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:28.617 02:04:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:28.617 02:04:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.617 02:04:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.617 02:04:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:28.617 02:04:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:28.617 02:04:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:28.617 02:04:43 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:28.617 02:04:43 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:28.617 02:04:43 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:28.617 02:04:43 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:28.617 02:04:43 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:28.617 02:04:43 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:28.617 02:04:43 -- target/referrals.sh@37 -- # nvmftestinit 00:07:28.617 02:04:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:28.617 02:04:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.617 02:04:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:28.617 02:04:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:28.617 02:04:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:28.617 02:04:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.617 02:04:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.617 02:04:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.875 02:04:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:28.875 02:04:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:28.875 02:04:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:28.875 02:04:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:28.875 02:04:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:28.875 02:04:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:28.875 02:04:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.875 02:04:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.875 02:04:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:28.875 02:04:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:28.875 02:04:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:28.875 02:04:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:28.875 02:04:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:28.875 02:04:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.875 02:04:43 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:28.875 02:04:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:28.875 02:04:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:28.875 02:04:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:28.875 02:04:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:28.875 02:04:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:28.875 Cannot find device "nvmf_tgt_br" 00:07:28.875 02:04:43 -- nvmf/common.sh@154 -- # true 00:07:28.875 02:04:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:28.875 Cannot find device "nvmf_tgt_br2" 00:07:28.875 02:04:43 -- nvmf/common.sh@155 -- # true 00:07:28.875 02:04:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:28.875 02:04:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:28.875 Cannot find device "nvmf_tgt_br" 00:07:28.875 02:04:43 -- nvmf/common.sh@157 -- # true 00:07:28.875 02:04:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:28.875 Cannot find device "nvmf_tgt_br2" 00:07:28.875 02:04:43 -- nvmf/common.sh@158 -- # true 00:07:28.875 02:04:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:28.875 02:04:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:28.875 02:04:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:28.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:28.875 02:04:43 -- nvmf/common.sh@161 -- # true 00:07:28.875 02:04:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:28.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:28.875 02:04:43 -- nvmf/common.sh@162 -- # true 00:07:28.875 02:04:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:28.875 02:04:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:28.875 02:04:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:28.875 02:04:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:28.875 02:04:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:28.875 02:04:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:28.875 02:04:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:28.875 02:04:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:28.875 02:04:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:28.875 02:04:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:28.875 02:04:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:28.875 02:04:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:28.875 02:04:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:28.875 02:04:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:28.875 02:04:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:28.875 02:04:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:28.875 02:04:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:29.132 02:04:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:29.132 02:04:43 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.132 02:04:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.132 02:04:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:29.132 02:04:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.132 02:04:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.132 02:04:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:29.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:07:29.132 00:07:29.132 --- 10.0.0.2 ping statistics --- 00:07:29.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.132 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:29.132 02:04:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:29.132 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:29.132 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:29.133 00:07:29.133 --- 10.0.0.3 ping statistics --- 00:07:29.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.133 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:29.133 02:04:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:29.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:29.133 00:07:29.133 --- 10.0.0.1 ping statistics --- 00:07:29.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.133 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:29.133 02:04:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.133 02:04:43 -- nvmf/common.sh@421 -- # return 0 00:07:29.133 02:04:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:29.133 02:04:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.133 02:04:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:29.133 02:04:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:29.133 02:04:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.133 02:04:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:29.133 02:04:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:29.133 02:04:43 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:29.133 02:04:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:29.133 02:04:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:29.133 02:04:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.133 02:04:43 -- nvmf/common.sh@469 -- # nvmfpid=61396 00:07:29.133 02:04:43 -- nvmf/common.sh@470 -- # waitforlisten 61396 00:07:29.133 02:04:43 -- common/autotest_common.sh@819 -- # '[' -z 61396 ']' 00:07:29.133 02:04:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:29.133 02:04:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.133 02:04:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:29.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.133 02:04:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
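For reference, the veth/bridge scaffolding that nvmf_veth_init assembles above can be reproduced by hand. A minimal sketch using the same interface names, addresses and firewall rules that appear in this log (error handling and the second target interface, nvmf_tgt_if2, omitted):

    ip netns add nvmf_tgt_ns_spdk                                   # namespace that will host nvmf_tgt
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                                 # bridge the two host-side peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                              # initiator -> target reachability check

The test only proceeds once these pings succeed and nvme-tcp is loaded; nvmf_tgt is then launched inside the namespace and the script waits for its RPC socket at /var/tmp/spdk.sock, which is what the surrounding trace shows.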
00:07:29.133 02:04:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:29.133 02:04:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.133 [2024-05-14 02:04:43.616170] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:29.133 [2024-05-14 02:04:43.616257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.390 [2024-05-14 02:04:43.752074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.390 [2024-05-14 02:04:43.814900] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:29.390 [2024-05-14 02:04:43.815066] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.390 [2024-05-14 02:04:43.815083] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.390 [2024-05-14 02:04:43.815094] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.390 [2024-05-14 02:04:43.815194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.390 [2024-05-14 02:04:43.815271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.390 [2024-05-14 02:04:43.815334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.390 [2024-05-14 02:04:43.815339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.323 02:04:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:30.323 02:04:44 -- common/autotest_common.sh@852 -- # return 0 00:07:30.323 02:04:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:30.323 02:04:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:30.323 02:04:44 -- common/autotest_common.sh@10 -- # set +x 00:07:30.323 02:04:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.323 02:04:44 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.323 02:04:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.323 02:04:44 -- common/autotest_common.sh@10 -- # set +x 00:07:30.323 [2024-05-14 02:04:44.754037] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.323 02:04:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.323 02:04:44 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:30.323 02:04:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.323 02:04:44 -- common/autotest_common.sh@10 -- # set +x 00:07:30.323 [2024-05-14 02:04:44.780216] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:30.323 02:04:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.323 02:04:44 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:30.323 02:04:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.323 02:04:44 -- common/autotest_common.sh@10 -- # set +x 00:07:30.323 02:04:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.323 02:04:44 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:30.323 02:04:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.323 02:04:44 -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.323 02:04:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.323 02:04:44 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:30.323 02:04:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.323 02:04:44 -- common/autotest_common.sh@10 -- # set +x 00:07:30.323 02:04:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.323 02:04:44 -- target/referrals.sh@48 -- # jq length 00:07:30.323 02:04:44 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:30.323 02:04:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.323 02:04:44 -- common/autotest_common.sh@10 -- # set +x 00:07:30.323 02:04:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.323 02:04:44 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:30.323 02:04:44 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:30.323 02:04:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:30.323 02:04:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:30.323 02:04:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.324 02:04:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:30.324 02:04:44 -- common/autotest_common.sh@10 -- # set +x 00:07:30.324 02:04:44 -- target/referrals.sh@21 -- # sort 00:07:30.324 02:04:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.582 02:04:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:30.582 02:04:44 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:30.582 02:04:44 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:30.582 02:04:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:30.582 02:04:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:30.582 02:04:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.582 02:04:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:30.582 02:04:44 -- target/referrals.sh@26 -- # sort 00:07:30.582 02:04:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:30.582 02:04:45 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:30.582 02:04:45 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:30.582 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.582 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:30.582 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.582 02:04:45 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:30.582 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.582 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:30.582 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.582 02:04:45 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:30.582 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.582 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:30.582 02:04:45 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.582 02:04:45 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:30.582 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.582 02:04:45 -- target/referrals.sh@56 -- # jq length 00:07:30.582 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:30.582 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.582 02:04:45 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:30.582 02:04:45 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:30.582 02:04:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:30.582 02:04:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:30.582 02:04:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.582 02:04:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:30.582 02:04:45 -- target/referrals.sh@26 -- # sort 00:07:30.839 02:04:45 -- target/referrals.sh@26 -- # echo 00:07:30.839 02:04:45 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:30.839 02:04:45 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:30.839 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.839 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:30.839 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.839 02:04:45 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:30.839 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.839 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:30.839 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.839 02:04:45 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:30.839 02:04:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:30.839 02:04:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:30.839 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.839 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:30.839 02:04:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:30.839 02:04:45 -- target/referrals.sh@21 -- # sort 00:07:30.839 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.839 02:04:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:30.839 02:04:45 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:30.839 02:04:45 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:30.839 02:04:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:30.839 02:04:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:30.839 02:04:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.839 02:04:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:30.839 02:04:45 -- target/referrals.sh@26 -- # sort 00:07:30.839 02:04:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:30.839 02:04:45 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:30.839 02:04:45 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:30.839 02:04:45 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:30.839 02:04:45 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:30.839 02:04:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.839 02:04:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:30.839 02:04:45 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:30.839 02:04:45 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:30.839 02:04:45 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:30.839 02:04:45 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:30.839 02:04:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.839 02:04:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:31.097 02:04:45 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:31.097 02:04:45 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:31.097 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.097 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.097 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.097 02:04:45 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:31.097 02:04:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:31.097 02:04:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:31.097 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.097 02:04:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:31.097 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.097 02:04:45 -- target/referrals.sh@21 -- # sort 00:07:31.097 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.097 02:04:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:31.097 02:04:45 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:31.097 02:04:45 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:31.097 02:04:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:31.097 02:04:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:31.097 02:04:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:31.097 02:04:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:31.097 02:04:45 -- target/referrals.sh@26 -- # sort 00:07:31.097 02:04:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:31.097 02:04:45 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:31.097 02:04:45 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:31.097 02:04:45 
-- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:31.097 02:04:45 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:31.097 02:04:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:31.097 02:04:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:31.355 02:04:45 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:31.355 02:04:45 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:31.355 02:04:45 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:31.355 02:04:45 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:31.355 02:04:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:31.355 02:04:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:31.355 02:04:45 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:31.355 02:04:45 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:31.355 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.355 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.355 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.355 02:04:45 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:31.355 02:04:45 -- target/referrals.sh@82 -- # jq length 00:07:31.355 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.355 02:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.355 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.355 02:04:45 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:31.355 02:04:45 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:31.355 02:04:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:31.355 02:04:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:31.355 02:04:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:31.355 02:04:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:31.355 02:04:45 -- target/referrals.sh@26 -- # sort 00:07:31.355 02:04:45 -- target/referrals.sh@26 -- # echo 00:07:31.355 02:04:45 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:31.355 02:04:45 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:31.355 02:04:45 -- target/referrals.sh@86 -- # nvmftestfini 00:07:31.355 02:04:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:31.355 02:04:45 -- nvmf/common.sh@116 -- # sync 00:07:31.613 02:04:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:31.613 02:04:45 -- nvmf/common.sh@119 -- # set +e 00:07:31.613 02:04:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:31.613 02:04:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:31.613 rmmod nvme_tcp 00:07:31.613 rmmod nvme_fabrics 00:07:31.613 rmmod nvme_keyring 00:07:31.613 
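The referral round trip exercised above reduces to a handful of RPCs on the target plus one discovery from the initiator. A condensed sketch built from the same commands the test traces (rpc_cmd is the test helper used throughout this log; 4430 is the advertised referral port; the log's nvme discover additionally passes --hostnqn/--hostid, omitted here):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                            # TCP transport
    rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery   # discovery service on 8009
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                                # three referrals
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    rpc_cmd nvmf_discovery_get_referrals | jq length                           # expect 3
    # initiator view: referrals show up as extra discovery log records
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                                # teardown mirrors setup
        rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

The later part of the test repeats the same pattern with -n nqn.2016-06.io.spdk:cnode1 and -n discovery to check that a referral can also carry an explicit subsystem NQN, which is what the nqn.2016-06.io.spdk:cnode1 / nqn.2014-08.org.nvmexpress.discovery comparisons above verify.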
02:04:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:31.613 02:04:46 -- nvmf/common.sh@123 -- # set -e 00:07:31.613 02:04:46 -- nvmf/common.sh@124 -- # return 0 00:07:31.613 02:04:46 -- nvmf/common.sh@477 -- # '[' -n 61396 ']' 00:07:31.613 02:04:46 -- nvmf/common.sh@478 -- # killprocess 61396 00:07:31.613 02:04:46 -- common/autotest_common.sh@926 -- # '[' -z 61396 ']' 00:07:31.613 02:04:46 -- common/autotest_common.sh@930 -- # kill -0 61396 00:07:31.613 02:04:46 -- common/autotest_common.sh@931 -- # uname 00:07:31.613 02:04:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:31.613 02:04:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61396 00:07:31.613 02:04:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:31.613 02:04:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:31.613 02:04:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61396' 00:07:31.613 killing process with pid 61396 00:07:31.613 02:04:46 -- common/autotest_common.sh@945 -- # kill 61396 00:07:31.613 02:04:46 -- common/autotest_common.sh@950 -- # wait 61396 00:07:31.899 02:04:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:31.899 02:04:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:31.899 02:04:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:31.899 02:04:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.899 02:04:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:31.899 02:04:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.899 02:04:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.899 02:04:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.899 02:04:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:31.899 00:07:31.899 real 0m3.174s 00:07:31.899 user 0m10.888s 00:07:31.899 sys 0m0.672s 00:07:31.899 02:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.899 02:04:46 -- common/autotest_common.sh@10 -- # set +x 00:07:31.899 ************************************ 00:07:31.899 END TEST nvmf_referrals 00:07:31.899 ************************************ 00:07:31.899 02:04:46 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:31.899 02:04:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:31.899 02:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.899 02:04:46 -- common/autotest_common.sh@10 -- # set +x 00:07:31.899 ************************************ 00:07:31.899 START TEST nvmf_connect_disconnect 00:07:31.899 ************************************ 00:07:31.899 02:04:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:31.899 * Looking for test storage... 
00:07:31.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.899 02:04:46 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:31.899 02:04:46 -- nvmf/common.sh@7 -- # uname -s 00:07:31.899 02:04:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.899 02:04:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.899 02:04:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.899 02:04:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.899 02:04:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.899 02:04:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.899 02:04:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.899 02:04:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.899 02:04:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.899 02:04:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.899 02:04:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:07:31.899 02:04:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:07:31.899 02:04:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.899 02:04:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.899 02:04:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:31.899 02:04:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.899 02:04:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.899 02:04:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.899 02:04:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.900 02:04:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.900 02:04:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.900 02:04:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.900 02:04:46 -- 
paths/export.sh@5 -- # export PATH 00:07:31.900 02:04:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.900 02:04:46 -- nvmf/common.sh@46 -- # : 0 00:07:31.900 02:04:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:31.900 02:04:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:31.900 02:04:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:31.900 02:04:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.900 02:04:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.900 02:04:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:31.900 02:04:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:31.900 02:04:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:31.900 02:04:46 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:31.900 02:04:46 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:31.900 02:04:46 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:31.900 02:04:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:31.900 02:04:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.900 02:04:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:31.900 02:04:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:31.900 02:04:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:31.900 02:04:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.900 02:04:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.900 02:04:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.900 02:04:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:31.900 02:04:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:31.900 02:04:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:31.900 02:04:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:31.900 02:04:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:31.900 02:04:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:31.900 02:04:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.900 02:04:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.900 02:04:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:31.900 02:04:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:31.900 02:04:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:31.900 02:04:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:31.900 02:04:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:31.900 02:04:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.900 02:04:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:31.900 02:04:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:31.900 02:04:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:31.900 02:04:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:31.900 02:04:46 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:07:31.900 02:04:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:31.900 Cannot find device "nvmf_tgt_br" 00:07:31.900 02:04:46 -- nvmf/common.sh@154 -- # true 00:07:31.900 02:04:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:31.900 Cannot find device "nvmf_tgt_br2" 00:07:31.900 02:04:46 -- nvmf/common.sh@155 -- # true 00:07:31.900 02:04:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:31.900 02:04:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:32.167 Cannot find device "nvmf_tgt_br" 00:07:32.167 02:04:46 -- nvmf/common.sh@157 -- # true 00:07:32.167 02:04:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:32.167 Cannot find device "nvmf_tgt_br2" 00:07:32.167 02:04:46 -- nvmf/common.sh@158 -- # true 00:07:32.167 02:04:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:32.167 02:04:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:32.167 02:04:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:32.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:32.167 02:04:46 -- nvmf/common.sh@161 -- # true 00:07:32.167 02:04:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:32.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:32.167 02:04:46 -- nvmf/common.sh@162 -- # true 00:07:32.168 02:04:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:32.168 02:04:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:32.168 02:04:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:32.168 02:04:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:32.168 02:04:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:32.168 02:04:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:32.168 02:04:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:32.168 02:04:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:32.168 02:04:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:32.168 02:04:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:32.168 02:04:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:32.168 02:04:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:32.168 02:04:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:32.168 02:04:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:32.168 02:04:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:32.168 02:04:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:32.168 02:04:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:32.168 02:04:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:32.168 02:04:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:32.168 02:04:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:32.168 02:04:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:32.168 02:04:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:07:32.168 02:04:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:32.168 02:04:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:32.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:07:32.168 00:07:32.168 --- 10.0.0.2 ping statistics --- 00:07:32.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.168 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:32.168 02:04:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:32.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:32.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:32.168 00:07:32.168 --- 10.0.0.3 ping statistics --- 00:07:32.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.168 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:32.168 02:04:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:32.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:07:32.168 00:07:32.168 --- 10.0.0.1 ping statistics --- 00:07:32.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.168 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:32.168 02:04:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.168 02:04:46 -- nvmf/common.sh@421 -- # return 0 00:07:32.168 02:04:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:32.168 02:04:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.168 02:04:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:32.168 02:04:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:32.168 02:04:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.168 02:04:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:32.168 02:04:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:32.425 02:04:46 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:32.425 02:04:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:32.425 02:04:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:32.425 02:04:46 -- common/autotest_common.sh@10 -- # set +x 00:07:32.425 02:04:46 -- nvmf/common.sh@469 -- # nvmfpid=61697 00:07:32.425 02:04:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:32.425 02:04:46 -- nvmf/common.sh@470 -- # waitforlisten 61697 00:07:32.425 02:04:46 -- common/autotest_common.sh@819 -- # '[' -z 61697 ']' 00:07:32.425 02:04:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.425 02:04:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:32.425 02:04:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.425 02:04:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:32.425 02:04:46 -- common/autotest_common.sh@10 -- # set +x 00:07:32.425 [2024-05-14 02:04:46.819564] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
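What follows is the connect/disconnect soak itself: after the TCP transport, a 64 MB malloc bdev (512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 and a 10.0.0.2:4420 listener are set up, the script runs 100 iterations of connect followed by disconnect, and each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line below marks one completed pass. A rough sketch of a single pass from the initiator side, assuming standard nvme-cli flags for transport, address and service id (only NVME_CONNECT='nvme connect -i 8' is echoed in this excerpt, so the remaining connect flags are an assumption):

    # assumed invocation; the log only shows that 'nvme connect -i 8' is used
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # ... the test would exercise the attached controller here ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the "disconnected 1 controller(s)" line seen below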
00:07:32.425 [2024-05-14 02:04:46.819650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.425 [2024-05-14 02:04:46.970286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.683 [2024-05-14 02:04:47.049594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:32.683 [2024-05-14 02:04:47.049779] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.683 [2024-05-14 02:04:47.049796] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.683 [2024-05-14 02:04:47.049808] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.683 [2024-05-14 02:04:47.049930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.683 [2024-05-14 02:04:47.049979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.683 [2024-05-14 02:04:47.053281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.683 [2024-05-14 02:04:47.053304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.683 02:04:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:32.683 02:04:47 -- common/autotest_common.sh@852 -- # return 0 00:07:32.683 02:04:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:32.683 02:04:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:32.683 02:04:47 -- common/autotest_common.sh@10 -- # set +x 00:07:32.683 02:04:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.683 02:04:47 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:32.683 02:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.683 02:04:47 -- common/autotest_common.sh@10 -- # set +x 00:07:32.683 [2024-05-14 02:04:47.170107] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.683 02:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.683 02:04:47 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:32.683 02:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.683 02:04:47 -- common/autotest_common.sh@10 -- # set +x 00:07:32.683 02:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.683 02:04:47 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:32.683 02:04:47 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:32.683 02:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.683 02:04:47 -- common/autotest_common.sh@10 -- # set +x 00:07:32.683 02:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.683 02:04:47 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:32.683 02:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.683 02:04:47 -- common/autotest_common.sh@10 -- # set +x 00:07:32.683 02:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.683 02:04:47 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.683 02:04:47 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.683 02:04:47 -- common/autotest_common.sh@10 -- # set +x 00:07:32.683 [2024-05-14 02:04:47.238488] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.683 02:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.683 02:04:47 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:32.683 02:04:47 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:32.683 02:04:47 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:32.683 02:04:47 -- target/connect_disconnect.sh@34 -- # set +x 00:07:35.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:04.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.053 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.811 02:08:28 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:14.811 02:08:28 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:14.811 02:08:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:14.811 02:08:28 -- nvmf/common.sh@116 -- # sync 00:11:14.811 02:08:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:14.811 02:08:28 -- nvmf/common.sh@119 -- # set +e 00:11:14.811 02:08:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:14.811 02:08:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:14.811 rmmod nvme_tcp 00:11:14.811 rmmod nvme_fabrics 00:11:14.811 rmmod nvme_keyring 00:11:14.811 02:08:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:14.811 02:08:28 -- nvmf/common.sh@123 -- # set -e 00:11:14.811 02:08:28 -- nvmf/common.sh@124 -- # return 0 00:11:14.811 02:08:28 -- nvmf/common.sh@477 -- # '[' -n 61697 ']' 00:11:14.811 02:08:28 -- nvmf/common.sh@478 -- # killprocess 61697 00:11:14.811 02:08:28 -- common/autotest_common.sh@926 -- # '[' -z 61697 ']' 00:11:14.811 02:08:28 -- common/autotest_common.sh@930 -- # kill -0 61697 00:11:14.811 02:08:28 -- common/autotest_common.sh@931 -- # uname 00:11:14.811 02:08:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:14.811 02:08:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61697 00:11:14.811 02:08:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:14.811 02:08:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:14.811 killing process with pid 61697 00:11:14.811 02:08:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61697' 00:11:14.811 02:08:29 -- common/autotest_common.sh@945 -- # kill 61697 00:11:14.811 02:08:29 -- common/autotest_common.sh@950 -- # wait 61697 00:11:14.811 02:08:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:14.811 02:08:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:14.811 02:08:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:14.811 02:08:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.811 02:08:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:14.811 02:08:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.811 02:08:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.811 02:08:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.811 02:08:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:14.811 00:11:14.811 real 3m42.914s 00:11:14.811 user 14m25.500s 00:11:14.811 sys 0m25.729s 00:11:14.811 02:08:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.811 
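The long run of "disconnected 1 controller(s)" lines above is the body of connect_disconnect.sh (num_iterations=100 in the log): the target is set up once, then the initiator connects and disconnects a hundred times. A rough recap of that flow, assuming plain nvme-cli calls; the script's own helpers (waitforserial and friends) handle the waiting more carefully than shown here:

    # setup, taken from the rpc_cmd calls earlier in the log
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                              # -> Malloc0 (64 MiB, 512-byte blocks)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # the loop behind the repeated disconnect messages (simplified sketch)
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # wait until a namespace with serial SPDKISFASTANDAWESOME shows up, then tear it down
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done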
************************************ 00:11:14.811 END TEST nvmf_connect_disconnect 00:11:14.811 ************************************ 00:11:14.811 02:08:29 -- common/autotest_common.sh@10 -- # set +x 00:11:14.811 02:08:29 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:14.811 02:08:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:14.811 02:08:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.811 02:08:29 -- common/autotest_common.sh@10 -- # set +x 00:11:14.811 ************************************ 00:11:14.811 START TEST nvmf_multitarget 00:11:14.811 ************************************ 00:11:14.811 02:08:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:14.811 * Looking for test storage... 00:11:14.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:14.811 02:08:29 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:14.811 02:08:29 -- nvmf/common.sh@7 -- # uname -s 00:11:14.811 02:08:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.811 02:08:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.811 02:08:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.811 02:08:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.811 02:08:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.811 02:08:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.811 02:08:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.811 02:08:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.811 02:08:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.811 02:08:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.811 02:08:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:14.811 02:08:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:14.811 02:08:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.811 02:08:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.811 02:08:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:14.811 02:08:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:14.811 02:08:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.811 02:08:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.811 02:08:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.811 02:08:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.811 02:08:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.811 02:08:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.811 02:08:29 -- paths/export.sh@5 -- # export PATH 00:11:14.811 02:08:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.811 02:08:29 -- nvmf/common.sh@46 -- # : 0 00:11:14.811 02:08:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:14.811 02:08:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:14.811 02:08:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:14.811 02:08:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.811 02:08:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.811 02:08:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:14.811 02:08:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:14.811 02:08:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:14.811 02:08:29 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:14.812 02:08:29 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:14.812 02:08:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:14.812 02:08:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.812 02:08:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:14.812 02:08:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:14.812 02:08:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:14.812 02:08:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.812 02:08:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.812 02:08:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.812 02:08:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:14.812 02:08:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:14.812 02:08:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:14.812 02:08:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:14.812 02:08:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:14.812 02:08:29 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:11:14.812 02:08:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.812 02:08:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.812 02:08:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:14.812 02:08:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:14.812 02:08:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:14.812 02:08:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:14.812 02:08:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:14.812 02:08:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.812 02:08:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:14.812 02:08:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:14.812 02:08:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:14.812 02:08:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:14.812 02:08:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:15.070 02:08:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:15.071 Cannot find device "nvmf_tgt_br" 00:11:15.071 02:08:29 -- nvmf/common.sh@154 -- # true 00:11:15.071 02:08:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:15.071 Cannot find device "nvmf_tgt_br2" 00:11:15.071 02:08:29 -- nvmf/common.sh@155 -- # true 00:11:15.071 02:08:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:15.071 02:08:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:15.071 Cannot find device "nvmf_tgt_br" 00:11:15.071 02:08:29 -- nvmf/common.sh@157 -- # true 00:11:15.071 02:08:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:15.071 Cannot find device "nvmf_tgt_br2" 00:11:15.071 02:08:29 -- nvmf/common.sh@158 -- # true 00:11:15.071 02:08:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:15.071 02:08:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:15.071 02:08:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:15.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.071 02:08:29 -- nvmf/common.sh@161 -- # true 00:11:15.071 02:08:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:15.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.071 02:08:29 -- nvmf/common.sh@162 -- # true 00:11:15.071 02:08:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:15.071 02:08:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:15.071 02:08:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:15.071 02:08:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:15.071 02:08:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:15.071 02:08:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:15.071 02:08:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:15.071 02:08:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:15.071 02:08:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:15.071 02:08:29 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:11:15.330 02:08:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:15.330 02:08:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:15.330 02:08:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:15.330 02:08:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:15.330 02:08:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:15.330 02:08:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:15.330 02:08:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:15.330 02:08:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:15.330 02:08:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:15.330 02:08:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:15.330 02:08:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:15.330 02:08:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:15.330 02:08:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:15.330 02:08:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:15.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:11:15.330 00:11:15.330 --- 10.0.0.2 ping statistics --- 00:11:15.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.330 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:15.330 02:08:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:15.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:15.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:15.330 00:11:15.330 --- 10.0.0.3 ping statistics --- 00:11:15.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.330 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:15.330 02:08:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:15.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:15.330 00:11:15.330 --- 10.0.0.1 ping statistics --- 00:11:15.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.330 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:15.330 02:08:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.330 02:08:29 -- nvmf/common.sh@421 -- # return 0 00:11:15.330 02:08:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:15.330 02:08:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.330 02:08:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:15.330 02:08:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:15.330 02:08:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.330 02:08:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:15.330 02:08:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:15.330 02:08:29 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:15.330 02:08:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:15.330 02:08:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:15.330 02:08:29 -- common/autotest_common.sh@10 -- # set +x 00:11:15.330 02:08:29 -- nvmf/common.sh@469 -- # nvmfpid=65436 00:11:15.330 02:08:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.330 02:08:29 -- nvmf/common.sh@470 -- # waitforlisten 65436 00:11:15.330 02:08:29 -- common/autotest_common.sh@819 -- # '[' -z 65436 ']' 00:11:15.330 02:08:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.330 02:08:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:15.330 02:08:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.330 02:08:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:15.330 02:08:29 -- common/autotest_common.sh@10 -- # set +x 00:11:15.330 [2024-05-14 02:08:29.835010] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:15.330 [2024-05-14 02:08:29.835108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.601 [2024-05-14 02:08:29.973557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.601 [2024-05-14 02:08:30.057785] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:15.601 [2024-05-14 02:08:30.057924] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.601 [2024-05-14 02:08:30.057939] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.601 [2024-05-14 02:08:30.057948] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
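The nvmfappstart call above follows the same pattern used before each test in this log: nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace and the harness blocks until the application answers on its JSON-RPC socket before any rpc_cmd is issued. Stripped of the bookkeeping that nvmf/common.sh and autotest_common.sh add, it amounts to roughly:

    # -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: reactors on cores 0-3
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait for the UNIX domain socket /var/tmp/spdk.sock to accept RPCs (what waitforlisten does)
    waitforlisten "$nvmfpid"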
00:11:15.601 [2024-05-14 02:08:30.058030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.601 [2024-05-14 02:08:30.058329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.601 [2024-05-14 02:08:30.058408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.601 [2024-05-14 02:08:30.058411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.536 02:08:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:16.536 02:08:30 -- common/autotest_common.sh@852 -- # return 0 00:11:16.536 02:08:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:16.536 02:08:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:16.536 02:08:30 -- common/autotest_common.sh@10 -- # set +x 00:11:16.536 02:08:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.536 02:08:30 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:16.536 02:08:30 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:16.536 02:08:30 -- target/multitarget.sh@21 -- # jq length 00:11:16.536 02:08:31 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:16.536 02:08:31 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:16.794 "nvmf_tgt_1" 00:11:16.794 02:08:31 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:16.794 "nvmf_tgt_2" 00:11:16.794 02:08:31 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:16.794 02:08:31 -- target/multitarget.sh@28 -- # jq length 00:11:17.053 02:08:31 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:17.053 02:08:31 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:17.053 true 00:11:17.053 02:08:31 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:17.311 true 00:11:17.311 02:08:31 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:17.311 02:08:31 -- target/multitarget.sh@35 -- # jq length 00:11:17.311 02:08:31 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:17.311 02:08:31 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:17.311 02:08:31 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:17.311 02:08:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:17.311 02:08:31 -- nvmf/common.sh@116 -- # sync 00:11:17.311 02:08:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:17.311 02:08:31 -- nvmf/common.sh@119 -- # set +e 00:11:17.311 02:08:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:17.311 02:08:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:17.311 rmmod nvme_tcp 00:11:17.311 rmmod nvme_fabrics 00:11:17.570 rmmod nvme_keyring 00:11:17.570 02:08:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:17.570 02:08:31 -- nvmf/common.sh@123 -- # set -e 00:11:17.570 02:08:31 -- nvmf/common.sh@124 -- # return 0 00:11:17.570 02:08:31 -- nvmf/common.sh@477 -- # '[' -n 65436 ']' 00:11:17.570 02:08:31 -- nvmf/common.sh@478 -- # killprocess 65436 00:11:17.570 02:08:31 
-- common/autotest_common.sh@926 -- # '[' -z 65436 ']' 00:11:17.570 02:08:31 -- common/autotest_common.sh@930 -- # kill -0 65436 00:11:17.570 02:08:31 -- common/autotest_common.sh@931 -- # uname 00:11:17.570 02:08:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:17.570 02:08:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65436 00:11:17.570 02:08:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:17.570 killing process with pid 65436 00:11:17.570 02:08:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:17.570 02:08:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65436' 00:11:17.570 02:08:31 -- common/autotest_common.sh@945 -- # kill 65436 00:11:17.570 02:08:31 -- common/autotest_common.sh@950 -- # wait 65436 00:11:17.570 02:08:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:17.570 02:08:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:17.570 02:08:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:17.570 02:08:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.570 02:08:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:17.570 02:08:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.570 02:08:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.570 02:08:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.829 02:08:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:17.829 00:11:17.829 real 0m2.886s 00:11:17.829 user 0m9.550s 00:11:17.829 sys 0m0.629s 00:11:17.829 02:08:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.829 ************************************ 00:11:17.829 END TEST nvmf_multitarget 00:11:17.829 ************************************ 00:11:17.829 02:08:32 -- common/autotest_common.sh@10 -- # set +x 00:11:17.829 02:08:32 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:17.829 02:08:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:17.829 02:08:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:17.829 02:08:32 -- common/autotest_common.sh@10 -- # set +x 00:11:17.829 ************************************ 00:11:17.829 START TEST nvmf_rpc 00:11:17.829 ************************************ 00:11:17.829 02:08:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:17.829 * Looking for test storage... 
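The nvmf_multitarget pass that finished above boils down to counting targets before and after creating and deleting two extra ones. A condensed recap of the multitarget_rpc.py calls and jq checks visible in the log (the script itself uses its own comparison helpers rather than these bracket tests):

    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1              # prints "true"
    $rpc nvmf_delete_target -n nvmf_tgt_2              # prints "true"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default only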
00:11:17.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:17.829 02:08:32 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:17.829 02:08:32 -- nvmf/common.sh@7 -- # uname -s 00:11:17.829 02:08:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.829 02:08:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.829 02:08:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.829 02:08:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.829 02:08:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.829 02:08:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.829 02:08:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.829 02:08:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.829 02:08:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.829 02:08:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.829 02:08:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:17.829 02:08:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:17.829 02:08:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.829 02:08:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.829 02:08:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:17.829 02:08:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.829 02:08:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.829 02:08:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.829 02:08:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.829 02:08:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.829 02:08:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.829 02:08:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.829 02:08:32 -- paths/export.sh@5 
-- # export PATH 00:11:17.830 02:08:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.830 02:08:32 -- nvmf/common.sh@46 -- # : 0 00:11:17.830 02:08:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:17.830 02:08:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:17.830 02:08:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:17.830 02:08:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.830 02:08:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.830 02:08:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:17.830 02:08:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:17.830 02:08:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:17.830 02:08:32 -- target/rpc.sh@11 -- # loops=5 00:11:17.830 02:08:32 -- target/rpc.sh@23 -- # nvmftestinit 00:11:17.830 02:08:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:17.830 02:08:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.830 02:08:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:17.830 02:08:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:17.830 02:08:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:17.830 02:08:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.830 02:08:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.830 02:08:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.830 02:08:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:17.830 02:08:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:17.830 02:08:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:17.830 02:08:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:17.830 02:08:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:17.830 02:08:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:17.830 02:08:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.830 02:08:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.830 02:08:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:17.830 02:08:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:17.830 02:08:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:17.830 02:08:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:17.830 02:08:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:17.830 02:08:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.830 02:08:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:17.830 02:08:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:17.830 02:08:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:17.830 02:08:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:17.830 02:08:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:17.830 02:08:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:17.830 Cannot find device 
"nvmf_tgt_br" 00:11:17.830 02:08:32 -- nvmf/common.sh@154 -- # true 00:11:17.830 02:08:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:17.830 Cannot find device "nvmf_tgt_br2" 00:11:17.830 02:08:32 -- nvmf/common.sh@155 -- # true 00:11:17.830 02:08:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:17.830 02:08:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:17.830 Cannot find device "nvmf_tgt_br" 00:11:17.830 02:08:32 -- nvmf/common.sh@157 -- # true 00:11:17.830 02:08:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:17.830 Cannot find device "nvmf_tgt_br2" 00:11:17.830 02:08:32 -- nvmf/common.sh@158 -- # true 00:11:17.830 02:08:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:18.089 02:08:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:18.089 02:08:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:18.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.089 02:08:32 -- nvmf/common.sh@161 -- # true 00:11:18.089 02:08:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:18.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.089 02:08:32 -- nvmf/common.sh@162 -- # true 00:11:18.089 02:08:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:18.089 02:08:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:18.089 02:08:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:18.089 02:08:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:18.089 02:08:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:18.089 02:08:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:18.089 02:08:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:18.089 02:08:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:18.089 02:08:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:18.089 02:08:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:18.089 02:08:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:18.089 02:08:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:18.089 02:08:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:18.089 02:08:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:18.089 02:08:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:18.089 02:08:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:18.089 02:08:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:18.089 02:08:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:18.089 02:08:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:18.089 02:08:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:18.089 02:08:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:18.089 02:08:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:18.089 02:08:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:18.089 02:08:32 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:18.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:11:18.089 00:11:18.089 --- 10.0.0.2 ping statistics --- 00:11:18.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.089 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:18.089 02:08:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:18.089 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:18.089 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:11:18.089 00:11:18.089 --- 10.0.0.3 ping statistics --- 00:11:18.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.089 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:18.089 02:08:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:18.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:18.089 00:11:18.089 --- 10.0.0.1 ping statistics --- 00:11:18.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.089 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:18.089 02:08:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.089 02:08:32 -- nvmf/common.sh@421 -- # return 0 00:11:18.089 02:08:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:18.089 02:08:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.089 02:08:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:18.089 02:08:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:18.089 02:08:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.089 02:08:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:18.089 02:08:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:18.089 02:08:32 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:18.089 02:08:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:18.089 02:08:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:18.089 02:08:32 -- common/autotest_common.sh@10 -- # set +x 00:11:18.089 02:08:32 -- nvmf/common.sh@469 -- # nvmfpid=65663 00:11:18.089 02:08:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.089 02:08:32 -- nvmf/common.sh@470 -- # waitforlisten 65663 00:11:18.089 02:08:32 -- common/autotest_common.sh@819 -- # '[' -z 65663 ']' 00:11:18.089 02:08:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.089 02:08:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:18.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.089 02:08:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.089 02:08:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:18.089 02:08:32 -- common/autotest_common.sh@10 -- # set +x 00:11:18.347 [2024-05-14 02:08:32.718060] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:18.347 [2024-05-14 02:08:32.718156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.347 [2024-05-14 02:08:32.859439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.347 [2024-05-14 02:08:32.936235] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:18.605 [2024-05-14 02:08:32.936396] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.605 [2024-05-14 02:08:32.936412] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.605 [2024-05-14 02:08:32.936422] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.605 [2024-05-14 02:08:32.936536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.605 [2024-05-14 02:08:32.937340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.605 [2024-05-14 02:08:32.937464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.605 [2024-05-14 02:08:32.937475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.170 02:08:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:19.170 02:08:33 -- common/autotest_common.sh@852 -- # return 0 00:11:19.170 02:08:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:19.170 02:08:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:19.170 02:08:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.170 02:08:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.170 02:08:33 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:19.170 02:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.170 02:08:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.170 02:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.170 02:08:33 -- target/rpc.sh@26 -- # stats='{ 00:11:19.170 "poll_groups": [ 00:11:19.170 { 00:11:19.170 "admin_qpairs": 0, 00:11:19.170 "completed_nvme_io": 0, 00:11:19.170 "current_admin_qpairs": 0, 00:11:19.170 "current_io_qpairs": 0, 00:11:19.170 "io_qpairs": 0, 00:11:19.170 "name": "nvmf_tgt_poll_group_0", 00:11:19.170 "pending_bdev_io": 0, 00:11:19.170 "transports": [] 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "admin_qpairs": 0, 00:11:19.170 "completed_nvme_io": 0, 00:11:19.170 "current_admin_qpairs": 0, 00:11:19.170 "current_io_qpairs": 0, 00:11:19.170 "io_qpairs": 0, 00:11:19.170 "name": "nvmf_tgt_poll_group_1", 00:11:19.170 "pending_bdev_io": 0, 00:11:19.170 "transports": [] 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "admin_qpairs": 0, 00:11:19.170 "completed_nvme_io": 0, 00:11:19.170 "current_admin_qpairs": 0, 00:11:19.170 "current_io_qpairs": 0, 00:11:19.170 "io_qpairs": 0, 00:11:19.170 "name": "nvmf_tgt_poll_group_2", 00:11:19.170 "pending_bdev_io": 0, 00:11:19.170 "transports": [] 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "admin_qpairs": 0, 00:11:19.170 "completed_nvme_io": 0, 00:11:19.170 "current_admin_qpairs": 0, 00:11:19.170 "current_io_qpairs": 0, 00:11:19.170 "io_qpairs": 0, 00:11:19.170 "name": "nvmf_tgt_poll_group_3", 00:11:19.170 "pending_bdev_io": 0, 00:11:19.170 "transports": [] 00:11:19.170 } 00:11:19.170 ], 00:11:19.170 "tick_rate": 2200000000 00:11:19.170 }' 00:11:19.170 
02:08:33 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:19.170 02:08:33 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:19.170 02:08:33 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:19.170 02:08:33 -- target/rpc.sh@15 -- # wc -l 00:11:19.426 02:08:33 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:19.426 02:08:33 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:19.427 02:08:33 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:19.427 02:08:33 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.427 02:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.427 02:08:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.427 [2024-05-14 02:08:33.820179] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.427 02:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.427 02:08:33 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:19.427 02:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.427 02:08:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.427 02:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.427 02:08:33 -- target/rpc.sh@33 -- # stats='{ 00:11:19.427 "poll_groups": [ 00:11:19.427 { 00:11:19.427 "admin_qpairs": 0, 00:11:19.427 "completed_nvme_io": 0, 00:11:19.427 "current_admin_qpairs": 0, 00:11:19.427 "current_io_qpairs": 0, 00:11:19.427 "io_qpairs": 0, 00:11:19.427 "name": "nvmf_tgt_poll_group_0", 00:11:19.427 "pending_bdev_io": 0, 00:11:19.427 "transports": [ 00:11:19.427 { 00:11:19.427 "trtype": "TCP" 00:11:19.427 } 00:11:19.427 ] 00:11:19.427 }, 00:11:19.427 { 00:11:19.427 "admin_qpairs": 0, 00:11:19.427 "completed_nvme_io": 0, 00:11:19.427 "current_admin_qpairs": 0, 00:11:19.427 "current_io_qpairs": 0, 00:11:19.427 "io_qpairs": 0, 00:11:19.427 "name": "nvmf_tgt_poll_group_1", 00:11:19.427 "pending_bdev_io": 0, 00:11:19.427 "transports": [ 00:11:19.427 { 00:11:19.427 "trtype": "TCP" 00:11:19.427 } 00:11:19.427 ] 00:11:19.427 }, 00:11:19.427 { 00:11:19.427 "admin_qpairs": 0, 00:11:19.427 "completed_nvme_io": 0, 00:11:19.427 "current_admin_qpairs": 0, 00:11:19.427 "current_io_qpairs": 0, 00:11:19.427 "io_qpairs": 0, 00:11:19.427 "name": "nvmf_tgt_poll_group_2", 00:11:19.427 "pending_bdev_io": 0, 00:11:19.427 "transports": [ 00:11:19.427 { 00:11:19.427 "trtype": "TCP" 00:11:19.427 } 00:11:19.427 ] 00:11:19.427 }, 00:11:19.427 { 00:11:19.427 "admin_qpairs": 0, 00:11:19.427 "completed_nvme_io": 0, 00:11:19.427 "current_admin_qpairs": 0, 00:11:19.427 "current_io_qpairs": 0, 00:11:19.427 "io_qpairs": 0, 00:11:19.427 "name": "nvmf_tgt_poll_group_3", 00:11:19.427 "pending_bdev_io": 0, 00:11:19.427 "transports": [ 00:11:19.427 { 00:11:19.427 "trtype": "TCP" 00:11:19.427 } 00:11:19.427 ] 00:11:19.427 } 00:11:19.427 ], 00:11:19.427 "tick_rate": 2200000000 00:11:19.427 }' 00:11:19.427 02:08:33 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:19.427 02:08:33 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:19.427 02:08:33 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:19.427 02:08:33 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:19.427 02:08:33 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:19.427 02:08:33 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:19.427 02:08:33 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:19.427 02:08:33 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:19.427 02:08:33 -- 
target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:19.427 02:08:33 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:19.427 02:08:33 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:19.427 02:08:33 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:19.427 02:08:33 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:19.427 02:08:33 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:19.427 02:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.427 02:08:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.427 Malloc1 00:11:19.427 02:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.427 02:08:33 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:19.427 02:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.427 02:08:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.427 02:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.427 02:08:33 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.427 02:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.427 02:08:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.427 02:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.427 02:08:33 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:19.427 02:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.427 02:08:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.427 02:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.427 02:08:33 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.427 02:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.427 02:08:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.427 [2024-05-14 02:08:34.002200] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.427 02:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.427 02:08:34 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 -a 10.0.0.2 -s 4420 00:11:19.427 02:08:34 -- common/autotest_common.sh@640 -- # local es=0 00:11:19.427 02:08:34 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 -a 10.0.0.2 -s 4420 00:11:19.427 02:08:34 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:19.427 02:08:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:19.427 02:08:34 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:19.427 02:08:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:19.427 02:08:34 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:19.427 02:08:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:19.427 02:08:34 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:19.427 02:08:34 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:19.427 02:08:34 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 -a 10.0.0.2 -s 4420 00:11:19.685 [2024-05-14 02:08:34.024422] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9' 00:11:19.685 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:19.685 could not add new controller: failed to write to nvme-fabrics device 00:11:19.685 02:08:34 -- common/autotest_common.sh@643 -- # es=1 00:11:19.685 02:08:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:19.685 02:08:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:19.685 02:08:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:19.685 02:08:34 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:19.685 02:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.685 02:08:34 -- common/autotest_common.sh@10 -- # set +x 00:11:19.685 02:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.685 02:08:34 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:19.685 02:08:34 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:19.685 02:08:34 -- common/autotest_common.sh@1177 -- # local i=0 00:11:19.685 02:08:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.685 02:08:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:19.685 02:08:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:22.214 02:08:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:22.214 02:08:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:22.214 02:08:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:22.214 02:08:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:22.214 02:08:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:22.214 02:08:36 -- common/autotest_common.sh@1187 -- # return 0 00:11:22.214 02:08:36 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.214 02:08:36 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:22.214 02:08:36 -- common/autotest_common.sh@1198 -- # local i=0 00:11:22.214 02:08:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:22.214 02:08:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.214 02:08:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:22.214 02:08:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.214 02:08:36 -- common/autotest_common.sh@1210 -- # return 0 00:11:22.214 02:08:36 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:22.214 02:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:22.214 02:08:36 -- common/autotest_common.sh@10 
-- # set +x 00:11:22.214 02:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:22.214 02:08:36 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.214 02:08:36 -- common/autotest_common.sh@640 -- # local es=0 00:11:22.214 02:08:36 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.214 02:08:36 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:22.214 02:08:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:22.214 02:08:36 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:22.214 02:08:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:22.214 02:08:36 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:22.214 02:08:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:22.214 02:08:36 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:22.214 02:08:36 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:22.214 02:08:36 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.214 [2024-05-14 02:08:36.325589] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9' 00:11:22.214 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:22.214 could not add new controller: failed to write to nvme-fabrics device 00:11:22.214 02:08:36 -- common/autotest_common.sh@643 -- # es=1 00:11:22.214 02:08:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:22.214 02:08:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:22.214 02:08:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:22.214 02:08:36 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:22.214 02:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:22.214 02:08:36 -- common/autotest_common.sh@10 -- # set +x 00:11:22.214 02:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:22.214 02:08:36 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.214 02:08:36 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:22.214 02:08:36 -- common/autotest_common.sh@1177 -- # local i=0 00:11:22.214 02:08:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.214 02:08:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:22.214 02:08:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:24.114 02:08:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:24.114 02:08:38 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.114 02:08:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:24.114 02:08:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:24.114 02:08:38 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.114 02:08:38 -- common/autotest_common.sh@1187 -- # return 0 00:11:24.114 02:08:38 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.114 02:08:38 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:24.114 02:08:38 -- common/autotest_common.sh@1198 -- # local i=0 00:11:24.114 02:08:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:24.114 02:08:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.114 02:08:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:24.114 02:08:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.114 02:08:38 -- common/autotest_common.sh@1210 -- # return 0 00:11:24.114 02:08:38 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.114 02:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.114 02:08:38 -- common/autotest_common.sh@10 -- # set +x 00:11:24.114 02:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.114 02:08:38 -- target/rpc.sh@81 -- # seq 1 5 00:11:24.114 02:08:38 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:24.114 02:08:38 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:24.114 02:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.114 02:08:38 -- common/autotest_common.sh@10 -- # set +x 00:11:24.114 02:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.114 02:08:38 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.114 02:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.114 02:08:38 -- common/autotest_common.sh@10 -- # set +x 00:11:24.114 [2024-05-14 02:08:38.621035] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.114 02:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.114 02:08:38 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:24.114 02:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.114 02:08:38 -- common/autotest_common.sh@10 -- # set +x 00:11:24.114 02:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.114 02:08:38 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:24.114 02:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.114 02:08:38 -- common/autotest_common.sh@10 -- # set +x 00:11:24.114 02:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.114 02:08:38 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.372 02:08:38 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.372 02:08:38 -- common/autotest_common.sh@1177 -- # local i=0 00:11:24.372 02:08:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.372 02:08:38 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:24.372 02:08:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:26.275 02:08:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
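The target/rpc.sh trace above is exercising NVMe-oF host access control on nqn.2016-06.io.spdk:cnode1. Condensed into the underlying RPC/CLI calls it is roughly the following; rpc.py is assumed to be what the rpc_cmd wrapper invokes (scripts/rpc.py), and $NVME_HOSTNQN stands for the host NQN generated for this run:

# Host-ACL flow shown above (sketch; values copied from this run, not a spec)
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: any host allowed initially
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1                   # -d: enforce the host whitelist
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN" -a 10.0.0.2 -s 4420
#   -> rejected: "Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host ..."
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"            # whitelist this host; connect now succeeds
rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"         # remove it; connect is rejected again
rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1                   # -e: open to any host; connect succeeds
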
00:11:26.275 02:08:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:26.275 02:08:40 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.275 02:08:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:26.275 02:08:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.275 02:08:40 -- common/autotest_common.sh@1187 -- # return 0 00:11:26.275 02:08:40 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.533 02:08:40 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.533 02:08:40 -- common/autotest_common.sh@1198 -- # local i=0 00:11:26.533 02:08:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.533 02:08:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:26.533 02:08:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:26.533 02:08:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.533 02:08:40 -- common/autotest_common.sh@1210 -- # return 0 00:11:26.533 02:08:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.533 02:08:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:26.533 02:08:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.533 02:08:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:26.533 02:08:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.533 02:08:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:26.533 02:08:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.533 02:08:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:26.533 02:08:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:26.533 02:08:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:26.533 02:08:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:26.533 02:08:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.533 02:08:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:26.533 02:08:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.533 02:08:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:26.533 02:08:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.533 [2024-05-14 02:08:40.928371] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.533 02:08:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:26.533 02:08:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:26.533 02:08:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:26.533 02:08:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.533 02:08:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:26.533 02:08:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:26.533 02:08:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:26.533 02:08:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.533 02:08:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:26.533 02:08:40 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 
--hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.533 02:08:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:26.533 02:08:41 -- common/autotest_common.sh@1177 -- # local i=0 00:11:26.533 02:08:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.533 02:08:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:26.533 02:08:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:29.063 02:08:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:29.063 02:08:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.063 02:08:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:29.063 02:08:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:29.063 02:08:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.063 02:08:43 -- common/autotest_common.sh@1187 -- # return 0 00:11:29.063 02:08:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.063 02:08:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.063 02:08:43 -- common/autotest_common.sh@1198 -- # local i=0 00:11:29.063 02:08:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.063 02:08:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:29.063 02:08:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:29.063 02:08:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.063 02:08:43 -- common/autotest_common.sh@1210 -- # return 0 00:11:29.063 02:08:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.063 02:08:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:29.063 02:08:43 -- common/autotest_common.sh@10 -- # set +x 00:11:29.063 02:08:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:29.064 02:08:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.064 02:08:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:29.064 02:08:43 -- common/autotest_common.sh@10 -- # set +x 00:11:29.064 02:08:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:29.064 02:08:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:29.064 02:08:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.064 02:08:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:29.064 02:08:43 -- common/autotest_common.sh@10 -- # set +x 00:11:29.064 02:08:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:29.064 02:08:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.064 02:08:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:29.064 02:08:43 -- common/autotest_common.sh@10 -- # set +x 00:11:29.064 [2024-05-14 02:08:43.227887] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.064 02:08:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:29.064 02:08:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:29.064 02:08:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:29.064 02:08:43 -- common/autotest_common.sh@10 -- # set 
+x 00:11:29.064 02:08:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:29.064 02:08:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.064 02:08:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:29.064 02:08:43 -- common/autotest_common.sh@10 -- # set +x 00:11:29.064 02:08:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:29.064 02:08:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.064 02:08:43 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:29.064 02:08:43 -- common/autotest_common.sh@1177 -- # local i=0 00:11:29.064 02:08:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.064 02:08:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:29.064 02:08:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:30.968 02:08:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:30.968 02:08:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:30.968 02:08:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.968 02:08:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:30.968 02:08:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.968 02:08:45 -- common/autotest_common.sh@1187 -- # return 0 00:11:30.968 02:08:45 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.968 02:08:45 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.968 02:08:45 -- common/autotest_common.sh@1198 -- # local i=0 00:11:30.968 02:08:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:30.968 02:08:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.968 02:08:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:30.968 02:08:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.968 02:08:45 -- common/autotest_common.sh@1210 -- # return 0 00:11:30.968 02:08:45 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.968 02:08:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.968 02:08:45 -- common/autotest_common.sh@10 -- # set +x 00:11:30.968 02:08:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.968 02:08:45 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.968 02:08:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.968 02:08:45 -- common/autotest_common.sh@10 -- # set +x 00:11:30.968 02:08:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.968 02:08:45 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:30.968 02:08:45 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:30.968 02:08:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.968 02:08:45 -- common/autotest_common.sh@10 -- # set +x 00:11:30.968 02:08:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.968 02:08:45 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.968 02:08:45 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:11:30.968 02:08:45 -- common/autotest_common.sh@10 -- # set +x 00:11:30.968 [2024-05-14 02:08:45.535222] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.968 02:08:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.968 02:08:45 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:30.968 02:08:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.968 02:08:45 -- common/autotest_common.sh@10 -- # set +x 00:11:30.968 02:08:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.968 02:08:45 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:30.968 02:08:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.968 02:08:45 -- common/autotest_common.sh@10 -- # set +x 00:11:31.250 02:08:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.250 02:08:45 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.250 02:08:45 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.250 02:08:45 -- common/autotest_common.sh@1177 -- # local i=0 00:11:31.250 02:08:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.250 02:08:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:31.250 02:08:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:33.161 02:08:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:33.161 02:08:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:33.161 02:08:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.161 02:08:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:33.161 02:08:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.161 02:08:47 -- common/autotest_common.sh@1187 -- # return 0 00:11:33.161 02:08:47 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.419 02:08:47 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.419 02:08:47 -- common/autotest_common.sh@1198 -- # local i=0 00:11:33.419 02:08:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:33.419 02:08:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.419 02:08:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:33.419 02:08:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.419 02:08:47 -- common/autotest_common.sh@1210 -- # return 0 00:11:33.419 02:08:47 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:33.419 02:08:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.419 02:08:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.419 02:08:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.419 02:08:47 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.419 02:08:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.419 02:08:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.419 02:08:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.419 02:08:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
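Each pass of the seq 1 $loops loop traced above and below repeats the same build/connect/teardown cycle. Condensed into a sketch (serial, nsid and the 15-try/2-second retry budget are taken from the trace; the helpers themselves do more bookkeeping than shown):

# One loop iteration, as driven by target/rpc.sh
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5                 # fixed nsid 5
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
i=0
while (( i++ <= 15 )); do                                                            # waitforserial
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break    # block device with the serial is visible
    sleep 2
done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1                                        # waitforserial_disconnect then re-checks lsblk
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
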
00:11:33.419 02:08:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:33.419 02:08:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.419 02:08:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.419 02:08:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.419 02:08:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.419 02:08:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.419 02:08:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.419 [2024-05-14 02:08:47.842349] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.419 02:08:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.419 02:08:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:33.419 02:08:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.419 02:08:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.419 02:08:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.419 02:08:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:33.419 02:08:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.419 02:08:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.419 02:08:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.420 02:08:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.677 02:08:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.677 02:08:48 -- common/autotest_common.sh@1177 -- # local i=0 00:11:33.677 02:08:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.677 02:08:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:33.677 02:08:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:35.578 02:08:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:35.578 02:08:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:35.578 02:08:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.578 02:08:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:35.578 02:08:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.578 02:08:50 -- common/autotest_common.sh@1187 -- # return 0 00:11:35.578 02:08:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.578 02:08:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.578 02:08:50 -- common/autotest_common.sh@1198 -- # local i=0 00:11:35.578 02:08:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:35.578 02:08:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.578 02:08:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:35.578 02:08:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.578 02:08:50 -- common/autotest_common.sh@1210 -- # return 0 00:11:35.578 02:08:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:35.578 02:08:50 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:11:35.578 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.578 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.578 02:08:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.578 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.578 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.578 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.578 02:08:50 -- target/rpc.sh@99 -- # seq 1 5 00:11:35.578 02:08:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.578 02:08:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.578 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.578 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.578 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.578 02:08:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.578 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.578 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.578 [2024-05-14 02:08:50.165456] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.838 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.838 02:08:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.838 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.838 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.838 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.838 02:08:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.838 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.838 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.838 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.838 02:08:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.838 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.838 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.838 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.838 02:08:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.838 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.838 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.838 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.839 02:08:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 [2024-05-14 02:08:50.213481] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.839 02:08:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 [2024-05-14 02:08:50.261523] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.839 02:08:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 [2024-05-14 02:08:50.309595] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.839 02:08:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 [2024-05-14 02:08:50.357636] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:35.839 02:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.839 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 02:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.839 02:08:50 -- target/rpc.sh@110 -- # stats='{ 00:11:35.839 "poll_groups": [ 00:11:35.839 { 00:11:35.839 "admin_qpairs": 2, 00:11:35.839 "completed_nvme_io": 67, 00:11:35.839 "current_admin_qpairs": 0, 00:11:35.839 "current_io_qpairs": 0, 00:11:35.839 "io_qpairs": 16, 00:11:35.839 "name": "nvmf_tgt_poll_group_0", 00:11:35.839 "pending_bdev_io": 0, 00:11:35.839 "transports": [ 00:11:35.839 { 00:11:35.839 "trtype": "TCP" 00:11:35.839 } 00:11:35.839 ] 00:11:35.839 }, 00:11:35.839 { 00:11:35.839 "admin_qpairs": 3, 00:11:35.839 "completed_nvme_io": 68, 00:11:35.839 "current_admin_qpairs": 0, 00:11:35.839 "current_io_qpairs": 0, 00:11:35.839 "io_qpairs": 17, 00:11:35.839 "name": "nvmf_tgt_poll_group_1", 00:11:35.839 "pending_bdev_io": 0, 00:11:35.839 "transports": [ 00:11:35.839 { 00:11:35.839 "trtype": "TCP" 00:11:35.839 } 00:11:35.839 ] 00:11:35.839 }, 00:11:35.839 { 00:11:35.839 "admin_qpairs": 1, 00:11:35.839 "completed_nvme_io": 70, 00:11:35.839 "current_admin_qpairs": 0, 00:11:35.839 "current_io_qpairs": 0, 00:11:35.839 "io_qpairs": 19, 00:11:35.839 "name": "nvmf_tgt_poll_group_2", 00:11:35.839 "pending_bdev_io": 0, 00:11:35.839 "transports": [ 00:11:35.839 { 00:11:35.839 "trtype": "TCP" 00:11:35.839 } 00:11:35.839 ] 00:11:35.839 }, 00:11:35.839 { 00:11:35.839 "admin_qpairs": 1, 00:11:35.839 "completed_nvme_io": 215, 00:11:35.839 "current_admin_qpairs": 0, 00:11:35.839 "current_io_qpairs": 0, 00:11:35.839 "io_qpairs": 18, 00:11:35.839 "name": "nvmf_tgt_poll_group_3", 00:11:35.839 "pending_bdev_io": 0, 00:11:35.839 "transports": [ 00:11:35.839 { 00:11:35.839 "trtype": "TCP" 00:11:35.839 } 00:11:35.839 ] 00:11:35.839 } 00:11:35.839 ], 00:11:35.839 "tick_rate": 2200000000 00:11:35.839 }' 00:11:35.839 02:08:50 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:35.839 02:08:50 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:35.839 02:08:50 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:35.839 02:08:50 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:36.099 02:08:50 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:36.099 02:08:50 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:36.099 02:08:50 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:36.099 02:08:50 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 
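The nvmf_get_stats output above reports, per poll group, the admin and I/O qpair totals accumulated over the whole run (2+3+1+1 admin qpairs, 16+17+19+18 I/O qpairs) plus the tick_rate. The jcount/jsum helpers being traced here and below reduce that JSON with jq; roughly (a sketch with simplified signatures, not the exact rpc.sh definitions):

stats=$(rpc.py nvmf_get_stats)
jcount() { jq "$1" <<<"$stats" | wc -l; }                       # jcount '.poll_groups[].name'         -> 4
jsum()   { jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'; } # jsum   '.poll_groups[].admin_qpairs' -> 7 here
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))                 # final assertions made on the stats
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))                    # 70 > 0 in this run
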
00:11:36.099 02:08:50 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:36.099 02:08:50 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:36.099 02:08:50 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:36.099 02:08:50 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:36.099 02:08:50 -- target/rpc.sh@123 -- # nvmftestfini 00:11:36.099 02:08:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:36.099 02:08:50 -- nvmf/common.sh@116 -- # sync 00:11:36.099 02:08:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:36.099 02:08:50 -- nvmf/common.sh@119 -- # set +e 00:11:36.099 02:08:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:36.099 02:08:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:36.099 rmmod nvme_tcp 00:11:36.099 rmmod nvme_fabrics 00:11:36.099 rmmod nvme_keyring 00:11:36.099 02:08:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:36.099 02:08:50 -- nvmf/common.sh@123 -- # set -e 00:11:36.099 02:08:50 -- nvmf/common.sh@124 -- # return 0 00:11:36.099 02:08:50 -- nvmf/common.sh@477 -- # '[' -n 65663 ']' 00:11:36.099 02:08:50 -- nvmf/common.sh@478 -- # killprocess 65663 00:11:36.099 02:08:50 -- common/autotest_common.sh@926 -- # '[' -z 65663 ']' 00:11:36.099 02:08:50 -- common/autotest_common.sh@930 -- # kill -0 65663 00:11:36.099 02:08:50 -- common/autotest_common.sh@931 -- # uname 00:11:36.099 02:08:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:36.099 02:08:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65663 00:11:36.099 02:08:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:36.099 02:08:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:36.099 02:08:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65663' 00:11:36.099 killing process with pid 65663 00:11:36.099 02:08:50 -- common/autotest_common.sh@945 -- # kill 65663 00:11:36.100 02:08:50 -- common/autotest_common.sh@950 -- # wait 65663 00:11:36.359 02:08:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:36.359 02:08:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:36.359 02:08:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:36.359 02:08:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.359 02:08:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:36.359 02:08:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.359 02:08:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.359 02:08:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.359 02:08:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:36.359 00:11:36.359 real 0m18.631s 00:11:36.359 user 1m10.552s 00:11:36.359 sys 0m2.343s 00:11:36.359 02:08:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.359 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:36.359 ************************************ 00:11:36.359 END TEST nvmf_rpc 00:11:36.359 ************************************ 00:11:36.359 02:08:50 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:36.359 02:08:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:36.359 02:08:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:36.359 02:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:36.359 ************************************ 00:11:36.359 START TEST nvmf_invalid 00:11:36.359 ************************************ 00:11:36.359 
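Before the nvmf_invalid test starts below, the nvmf_rpc run above finishes with nvmftestfini; its trace corresponds roughly to the following sketch (65663 was this run's nvmf_tgt pid, not a fixed value):

modprobe -v -r nvme-tcp                        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill -0 65663 && kill 65663 && wait 65663      # killprocess: reactor_0 is still alive, so terminate and reap it
ip -4 addr flush nvmf_init_if                  # nvmf_tcp_fini: drop the initiator-side test address
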
02:08:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:36.618 * Looking for test storage... 00:11:36.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:36.619 02:08:50 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:36.619 02:08:50 -- nvmf/common.sh@7 -- # uname -s 00:11:36.619 02:08:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.619 02:08:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.619 02:08:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.619 02:08:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.619 02:08:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.619 02:08:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.619 02:08:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.619 02:08:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.619 02:08:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.619 02:08:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.619 02:08:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:36.619 02:08:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:36.619 02:08:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.619 02:08:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.619 02:08:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:36.619 02:08:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:36.619 02:08:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.619 02:08:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.619 02:08:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.619 02:08:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.619 02:08:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.619 02:08:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.619 02:08:51 -- paths/export.sh@5 -- # export PATH 00:11:36.619 02:08:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.619 02:08:51 -- nvmf/common.sh@46 -- # : 0 00:11:36.619 02:08:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:36.619 02:08:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:36.619 02:08:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:36.619 02:08:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.619 02:08:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.619 02:08:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:36.619 02:08:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:36.619 02:08:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:36.619 02:08:51 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:36.619 02:08:51 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.619 02:08:51 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:36.619 02:08:51 -- target/invalid.sh@14 -- # target=foobar 00:11:36.619 02:08:51 -- target/invalid.sh@16 -- # RANDOM=0 00:11:36.619 02:08:51 -- target/invalid.sh@34 -- # nvmftestinit 00:11:36.619 02:08:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:36.619 02:08:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.619 02:08:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:36.619 02:08:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:36.619 02:08:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:36.619 02:08:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.619 02:08:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.619 02:08:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.619 02:08:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:36.619 02:08:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:36.619 02:08:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:36.619 02:08:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:36.619 02:08:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:36.619 02:08:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:36.619 02:08:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.619 02:08:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.619 02:08:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
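As the nvmf/common.sh trace above shows, the harness fixes one host identity and one address plan up front and reuses them for every nvme connect in these tests. A sketch of that setup (the uuid is the one generated in this run; how NVME_HOSTID is derived from the NQN is an assumption):

NVME_HOSTNQN=$(nvme gen-hostnqn)               # -> nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 here
NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVMF_INITIATOR_IP=10.0.0.1 NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_SECOND_TARGET_IP=10.0.0.3
# invalid.sh itself only adds an NQN prefix and a deliberately bogus target name:
nqn=nqn.2016-06.io.spdk:cnode; target=foobar; RANDOM=0
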
00:11:36.619 02:08:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:36.619 02:08:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:36.619 02:08:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:36.619 02:08:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:36.619 02:08:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.619 02:08:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:36.619 02:08:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:36.619 02:08:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:36.619 02:08:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:36.619 02:08:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:36.619 02:08:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:36.619 Cannot find device "nvmf_tgt_br" 00:11:36.619 02:08:51 -- nvmf/common.sh@154 -- # true 00:11:36.619 02:08:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:36.619 Cannot find device "nvmf_tgt_br2" 00:11:36.619 02:08:51 -- nvmf/common.sh@155 -- # true 00:11:36.619 02:08:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:36.619 02:08:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:36.619 Cannot find device "nvmf_tgt_br" 00:11:36.619 02:08:51 -- nvmf/common.sh@157 -- # true 00:11:36.619 02:08:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:36.619 Cannot find device "nvmf_tgt_br2" 00:11:36.619 02:08:51 -- nvmf/common.sh@158 -- # true 00:11:36.619 02:08:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:36.619 02:08:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:36.619 02:08:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:36.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:36.619 02:08:51 -- nvmf/common.sh@161 -- # true 00:11:36.619 02:08:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:36.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:36.619 02:08:51 -- nvmf/common.sh@162 -- # true 00:11:36.619 02:08:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:36.619 02:08:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:36.619 02:08:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:36.619 02:08:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:36.619 02:08:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:36.619 02:08:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:36.879 02:08:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:36.879 02:08:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:36.879 02:08:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:36.879 02:08:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:36.879 02:08:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:36.879 02:08:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:36.879 02:08:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:11:36.879 02:08:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:36.879 02:08:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:36.879 02:08:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:36.879 02:08:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:36.879 02:08:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:36.879 02:08:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:36.879 02:08:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:36.879 02:08:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:36.879 02:08:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:36.879 02:08:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:36.879 02:08:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:36.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:11:36.879 00:11:36.879 --- 10.0.0.2 ping statistics --- 00:11:36.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.879 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:36.879 02:08:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:36.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:36.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:11:36.879 00:11:36.879 --- 10.0.0.3 ping statistics --- 00:11:36.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.879 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:36.879 02:08:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:36.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:36.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:36.879 00:11:36.879 --- 10.0.0.1 ping statistics --- 00:11:36.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.879 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:36.879 02:08:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.879 02:08:51 -- nvmf/common.sh@421 -- # return 0 00:11:36.879 02:08:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:36.879 02:08:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.879 02:08:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:36.879 02:08:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:36.879 02:08:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.879 02:08:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:36.879 02:08:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:36.879 02:08:51 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:36.879 02:08:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:36.879 02:08:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:36.879 02:08:51 -- common/autotest_common.sh@10 -- # set +x 00:11:36.879 02:08:51 -- nvmf/common.sh@469 -- # nvmfpid=66176 00:11:36.879 02:08:51 -- nvmf/common.sh@470 -- # waitforlisten 66176 00:11:36.879 02:08:51 -- common/autotest_common.sh@819 -- # '[' -z 66176 ']' 00:11:36.879 02:08:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.879 02:08:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.879 02:08:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:36.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.879 02:08:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.879 02:08:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:36.879 02:08:51 -- common/autotest_common.sh@10 -- # set +x 00:11:36.879 [2024-05-14 02:08:51.434724] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:36.879 [2024-05-14 02:08:51.434841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.138 [2024-05-14 02:08:51.576334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.138 [2024-05-14 02:08:51.647885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:37.138 [2024-05-14 02:08:51.648050] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.138 [2024-05-14 02:08:51.648066] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.138 [2024-05-14 02:08:51.648077] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
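The ip and iptables commands traced above are nvmf_veth_init building the virtual test network: one initiator veth pair left in the root namespace, two target veth pairs moved into nvmf_tgt_ns_spdk, everything bridged over nvmf_br, TCP port 4420 opened on the initiator interface, and connectivity proven with three pings. A minimal standalone sketch of the same topology, assuming root privileges and the iproute2/iptables tools; interface names and addresses are taken from the trace, and the loops merely condense the per-device commands shown above.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT          # NVMe/TCP listener port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace come from the teardown of any previous run and are tolerated by the script; they are expected on a clean host.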
00:11:37.138 [2024-05-14 02:08:51.648187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.138 [2024-05-14 02:08:51.648471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.138 [2024-05-14 02:08:51.648540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.138 [2024-05-14 02:08:51.648542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.073 02:08:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:38.073 02:08:52 -- common/autotest_common.sh@852 -- # return 0 00:11:38.073 02:08:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:38.073 02:08:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:38.073 02:08:52 -- common/autotest_common.sh@10 -- # set +x 00:11:38.073 02:08:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.073 02:08:52 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:38.073 02:08:52 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9447 00:11:38.332 [2024-05-14 02:08:52.722378] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:38.332 02:08:52 -- target/invalid.sh@40 -- # out='2024/05/14 02:08:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9447 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:38.332 request: 00:11:38.332 { 00:11:38.332 "method": "nvmf_create_subsystem", 00:11:38.332 "params": { 00:11:38.332 "nqn": "nqn.2016-06.io.spdk:cnode9447", 00:11:38.332 "tgt_name": "foobar" 00:11:38.332 } 00:11:38.332 } 00:11:38.332 Got JSON-RPC error response 00:11:38.332 GoRPCClient: error on JSON-RPC call' 00:11:38.332 02:08:52 -- target/invalid.sh@41 -- # [[ 2024/05/14 02:08:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9447 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:38.332 request: 00:11:38.332 { 00:11:38.332 "method": "nvmf_create_subsystem", 00:11:38.332 "params": { 00:11:38.332 "nqn": "nqn.2016-06.io.spdk:cnode9447", 00:11:38.332 "tgt_name": "foobar" 00:11:38.332 } 00:11:38.332 } 00:11:38.332 Got JSON-RPC error response 00:11:38.332 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:38.332 02:08:52 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:38.332 02:08:52 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11619 00:11:38.591 [2024-05-14 02:08:53.006683] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11619: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:38.591 02:08:53 -- target/invalid.sh@45 -- # out='2024/05/14 02:08:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11619 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:38.591 request: 00:11:38.591 { 00:11:38.592 "method": "nvmf_create_subsystem", 00:11:38.592 "params": { 00:11:38.592 "nqn": "nqn.2016-06.io.spdk:cnode11619", 00:11:38.592 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:11:38.592 } 00:11:38.592 } 00:11:38.592 Got JSON-RPC error response 00:11:38.592 GoRPCClient: error on JSON-RPC call' 00:11:38.592 02:08:53 -- target/invalid.sh@46 -- # [[ 2024/05/14 02:08:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11619 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:38.592 request: 00:11:38.592 { 00:11:38.592 "method": "nvmf_create_subsystem", 00:11:38.592 "params": { 00:11:38.592 "nqn": "nqn.2016-06.io.spdk:cnode11619", 00:11:38.592 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:38.592 } 00:11:38.592 } 00:11:38.592 Got JSON-RPC error response 00:11:38.592 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:38.592 02:08:53 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:38.592 02:08:53 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30438 00:11:38.851 [2024-05-14 02:08:53.278923] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30438: invalid model number 'SPDK_Controller' 00:11:38.851 02:08:53 -- target/invalid.sh@50 -- # out='2024/05/14 02:08:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode30438], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:38.851 request: 00:11:38.851 { 00:11:38.851 "method": "nvmf_create_subsystem", 00:11:38.851 "params": { 00:11:38.851 "nqn": "nqn.2016-06.io.spdk:cnode30438", 00:11:38.851 "model_number": "SPDK_Controller\u001f" 00:11:38.851 } 00:11:38.851 } 00:11:38.851 Got JSON-RPC error response 00:11:38.851 GoRPCClient: error on JSON-RPC call' 00:11:38.851 02:08:53 -- target/invalid.sh@51 -- # [[ 2024/05/14 02:08:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode30438], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:38.851 request: 00:11:38.851 { 00:11:38.851 "method": "nvmf_create_subsystem", 00:11:38.851 "params": { 00:11:38.851 "nqn": "nqn.2016-06.io.spdk:cnode30438", 00:11:38.851 "model_number": "SPDK_Controller\u001f" 00:11:38.851 } 00:11:38.851 } 00:11:38.851 Got JSON-RPC error response 00:11:38.851 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:38.851 02:08:53 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:38.851 02:08:53 -- target/invalid.sh@19 -- # local length=21 ll 00:11:38.851 02:08:53 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:38.851 02:08:53 -- target/invalid.sh@21 -- # local chars 00:11:38.851 02:08:53 -- target/invalid.sh@22 -- # local string 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 
00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # printf %x 103 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # string+=g 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # printf %x 121 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # string+=y 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # printf %x 94 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # string+='^' 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # printf %x 72 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # string+=H 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # printf %x 79 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # string+=O 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # printf %x 112 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # string+=p 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.851 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # printf %x 87 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:38.851 02:08:53 -- target/invalid.sh@25 -- # string+=W 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 35 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+='#' 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 114 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=r 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 81 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=Q 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 34 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+='"' 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 
00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 60 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+='<' 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 87 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=W 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 80 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=P 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 103 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=g 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 76 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=L 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 79 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=O 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 80 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=P 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 55 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=7 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 93 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=']' 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # printf %x 78 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:38.852 02:08:53 -- target/invalid.sh@25 -- # string+=N 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.852 02:08:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.852 02:08:53 -- target/invalid.sh@28 -- # [[ g == \- ]] 00:11:38.852 02:08:53 -- target/invalid.sh@31 -- # echo 'gy^HOpW#rQ" /dev/null' 00:11:42.400 02:08:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.400 02:08:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:42.400 00:11:42.400 real 
0m5.931s 00:11:42.400 user 0m24.039s 00:11:42.400 sys 0m1.176s 00:11:42.400 02:08:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.400 02:08:56 -- common/autotest_common.sh@10 -- # set +x 00:11:42.400 ************************************ 00:11:42.400 END TEST nvmf_invalid 00:11:42.400 ************************************ 00:11:42.400 02:08:56 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:42.400 02:08:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:42.400 02:08:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:42.400 02:08:56 -- common/autotest_common.sh@10 -- # set +x 00:11:42.400 ************************************ 00:11:42.400 START TEST nvmf_abort 00:11:42.400 ************************************ 00:11:42.400 02:08:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:42.400 * Looking for test storage... 00:11:42.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:42.400 02:08:56 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:42.400 02:08:56 -- nvmf/common.sh@7 -- # uname -s 00:11:42.400 02:08:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.400 02:08:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.400 02:08:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.400 02:08:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.400 02:08:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.400 02:08:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.400 02:08:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.400 02:08:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.400 02:08:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.400 02:08:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.400 02:08:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:42.400 02:08:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:42.400 02:08:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.400 02:08:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.400 02:08:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:42.400 02:08:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.400 02:08:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.400 02:08:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.400 02:08:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.400 02:08:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.400 02:08:56 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.400 02:08:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.400 02:08:56 -- paths/export.sh@5 -- # export PATH 00:11:42.400 02:08:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.400 02:08:56 -- nvmf/common.sh@46 -- # : 0 00:11:42.400 02:08:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:42.400 02:08:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:42.400 02:08:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:42.400 02:08:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.400 02:08:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.400 02:08:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:42.400 02:08:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:42.400 02:08:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:42.400 02:08:56 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.400 02:08:56 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:42.400 02:08:56 -- target/abort.sh@14 -- # nvmftestinit 00:11:42.400 02:08:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:42.400 02:08:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.400 02:08:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:42.400 02:08:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:42.400 02:08:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:42.400 02:08:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.400 02:08:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.400 02:08:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.659 02:08:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:42.659 02:08:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:42.659 02:08:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:42.659 02:08:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:42.659 02:08:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:42.659 02:08:56 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:11:42.659 02:08:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.659 02:08:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.659 02:08:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:42.659 02:08:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:42.659 02:08:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:42.659 02:08:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:42.659 02:08:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:42.659 02:08:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.659 02:08:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:42.659 02:08:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:42.659 02:08:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:42.659 02:08:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:42.659 02:08:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:42.659 02:08:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:42.659 Cannot find device "nvmf_tgt_br" 00:11:42.659 02:08:57 -- nvmf/common.sh@154 -- # true 00:11:42.659 02:08:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.659 Cannot find device "nvmf_tgt_br2" 00:11:42.659 02:08:57 -- nvmf/common.sh@155 -- # true 00:11:42.659 02:08:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:42.659 02:08:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:42.659 Cannot find device "nvmf_tgt_br" 00:11:42.659 02:08:57 -- nvmf/common.sh@157 -- # true 00:11:42.659 02:08:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:42.659 Cannot find device "nvmf_tgt_br2" 00:11:42.659 02:08:57 -- nvmf/common.sh@158 -- # true 00:11:42.659 02:08:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:42.659 02:08:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:42.659 02:08:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.659 02:08:57 -- nvmf/common.sh@161 -- # true 00:11:42.659 02:08:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:42.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.659 02:08:57 -- nvmf/common.sh@162 -- # true 00:11:42.659 02:08:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:42.659 02:08:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:42.659 02:08:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:42.659 02:08:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:42.659 02:08:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:42.659 02:08:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:42.659 02:08:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:42.659 02:08:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:42.659 02:08:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:42.659 02:08:57 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:11:42.659 02:08:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:42.659 02:08:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:42.659 02:08:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:42.659 02:08:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:42.659 02:08:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:42.659 02:08:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:42.659 02:08:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:42.659 02:08:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:42.918 02:08:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:42.918 02:08:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:42.918 02:08:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:42.918 02:08:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:42.918 02:08:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:42.918 02:08:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:42.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:11:42.918 00:11:42.918 --- 10.0.0.2 ping statistics --- 00:11:42.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.918 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:11:42.918 02:08:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:42.918 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:42.918 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:11:42.918 00:11:42.918 --- 10.0.0.3 ping statistics --- 00:11:42.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.918 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:42.918 02:08:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:42.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:42.918 00:11:42.918 --- 10.0.0.1 ping statistics --- 00:11:42.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.918 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:42.918 02:08:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.918 02:08:57 -- nvmf/common.sh@421 -- # return 0 00:11:42.918 02:08:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:42.918 02:08:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.918 02:08:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:42.918 02:08:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:42.918 02:08:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.918 02:08:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:42.918 02:08:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:42.918 02:08:57 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:42.918 02:08:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:42.918 02:08:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:42.918 02:08:57 -- common/autotest_common.sh@10 -- # set +x 00:11:42.918 02:08:57 -- nvmf/common.sh@469 -- # nvmfpid=66684 00:11:42.918 02:08:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:42.918 02:08:57 -- nvmf/common.sh@470 -- # waitforlisten 66684 00:11:42.918 02:08:57 -- common/autotest_common.sh@819 -- # '[' -z 66684 ']' 00:11:42.918 02:08:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.918 02:08:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:42.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.918 02:08:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.918 02:08:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:42.918 02:08:57 -- common/autotest_common.sh@10 -- # set +x 00:11:42.918 [2024-05-14 02:08:57.408922] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:42.918 [2024-05-14 02:08:57.409059] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.178 [2024-05-14 02:08:57.546962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:43.178 [2024-05-14 02:08:57.604737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:43.178 [2024-05-14 02:08:57.605075] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.178 [2024-05-14 02:08:57.605227] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.178 [2024-05-14 02:08:57.605394] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
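nvmfappstart, traced above, launches nvmf_tgt inside the namespace (core mask 0xE for this abort test, pid 66684) and then sits in waitforlisten until the application is up and listening on /var/tmp/spdk.sock. A rough, hedged equivalent of that wait using only shell built-ins and coreutils follows; the real waitforlisten in autotest_common.sh does more than a socket-file check, so treat this purely as an illustration. The binary path, core mask and socket path are from the trace; the retry count and sleep interval are arbitrary.

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      [ -S /var/tmp/spdk.sock ] && break      # RPC socket present: rpc.py calls can be issued
      sleep 0.1
  done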
00:11:43.178 [2024-05-14 02:08:57.605656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.178 [2024-05-14 02:08:57.605720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.178 [2024-05-14 02:08:57.605929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.114 02:08:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:44.114 02:08:58 -- common/autotest_common.sh@852 -- # return 0 00:11:44.114 02:08:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:44.114 02:08:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:44.114 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 02:08:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.114 02:08:58 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:44.114 02:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.114 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 [2024-05-14 02:08:58.382515] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.114 02:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.114 02:08:58 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:44.114 02:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.114 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 Malloc0 00:11:44.114 02:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.114 02:08:58 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:44.114 02:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.114 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 Delay0 00:11:44.114 02:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.114 02:08:58 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:44.114 02:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.114 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 02:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.114 02:08:58 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:44.114 02:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.114 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 02:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.114 02:08:58 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:44.114 02:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.114 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 [2024-05-14 02:08:58.443231] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.114 02:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.114 02:08:58 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:44.114 02:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.114 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 02:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.114 02:08:58 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:44.114 [2024-05-14 02:08:58.637655] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:46.644 Initializing NVMe Controllers 00:11:46.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:46.644 controller IO queue size 128 less than required 00:11:46.644 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:46.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:46.644 Initialization complete. Launching workers. 00:11:46.644 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29244 00:11:46.644 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29305, failed to submit 62 00:11:46.644 success 29244, unsuccess 61, failed 0 00:11:46.644 02:09:00 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:46.644 02:09:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.644 02:09:00 -- common/autotest_common.sh@10 -- # set +x 00:11:46.644 02:09:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.644 02:09:00 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:46.644 02:09:00 -- target/abort.sh@38 -- # nvmftestfini 00:11:46.644 02:09:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:46.644 02:09:00 -- nvmf/common.sh@116 -- # sync 00:11:46.644 02:09:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:46.644 02:09:00 -- nvmf/common.sh@119 -- # set +e 00:11:46.644 02:09:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:46.644 02:09:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:46.644 rmmod nvme_tcp 00:11:46.644 rmmod nvme_fabrics 00:11:46.644 rmmod nvme_keyring 00:11:46.644 02:09:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:46.644 02:09:00 -- nvmf/common.sh@123 -- # set -e 00:11:46.644 02:09:00 -- nvmf/common.sh@124 -- # return 0 00:11:46.644 02:09:00 -- nvmf/common.sh@477 -- # '[' -n 66684 ']' 00:11:46.644 02:09:00 -- nvmf/common.sh@478 -- # killprocess 66684 00:11:46.644 02:09:00 -- common/autotest_common.sh@926 -- # '[' -z 66684 ']' 00:11:46.644 02:09:00 -- common/autotest_common.sh@930 -- # kill -0 66684 00:11:46.644 02:09:00 -- common/autotest_common.sh@931 -- # uname 00:11:46.644 02:09:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:46.644 02:09:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66684 00:11:46.644 02:09:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:46.644 02:09:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:46.644 killing process with pid 66684 00:11:46.644 02:09:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66684' 00:11:46.644 02:09:00 -- common/autotest_common.sh@945 -- # kill 66684 00:11:46.644 02:09:00 -- common/autotest_common.sh@950 -- # wait 66684 00:11:46.644 02:09:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:46.644 02:09:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:46.644 02:09:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:46.644 02:09:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.644 02:09:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:46.644 02:09:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.644 
02:09:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.644 02:09:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.644 02:09:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:46.644 00:11:46.644 real 0m4.148s 00:11:46.644 user 0m12.128s 00:11:46.644 sys 0m0.909s 00:11:46.644 02:09:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.644 ************************************ 00:11:46.644 END TEST nvmf_abort 00:11:46.644 02:09:01 -- common/autotest_common.sh@10 -- # set +x 00:11:46.644 ************************************ 00:11:46.644 02:09:01 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:46.644 02:09:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:46.644 02:09:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:46.644 02:09:01 -- common/autotest_common.sh@10 -- # set +x 00:11:46.644 ************************************ 00:11:46.644 START TEST nvmf_ns_hotplug_stress 00:11:46.644 ************************************ 00:11:46.644 02:09:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:46.644 * Looking for test storage... 00:11:46.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:46.644 02:09:01 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:46.644 02:09:01 -- nvmf/common.sh@7 -- # uname -s 00:11:46.644 02:09:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.644 02:09:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.644 02:09:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.644 02:09:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.644 02:09:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.644 02:09:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.644 02:09:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.644 02:09:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.644 02:09:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.644 02:09:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.644 02:09:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:46.644 02:09:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:11:46.644 02:09:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.644 02:09:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.644 02:09:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:46.644 02:09:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:46.644 02:09:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.644 02:09:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.644 02:09:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.644 02:09:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.644 02:09:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.644 02:09:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.644 02:09:01 -- paths/export.sh@5 -- # export PATH 00:11:46.644 02:09:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.644 02:09:01 -- nvmf/common.sh@46 -- # : 0 00:11:46.644 02:09:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:46.644 02:09:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:46.644 02:09:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:46.644 02:09:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.644 02:09:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.644 02:09:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:46.644 02:09:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:46.644 02:09:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:46.644 02:09:01 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.644 02:09:01 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:11:46.644 02:09:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:46.644 02:09:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.644 02:09:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:46.644 02:09:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:46.644 02:09:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:46.644 02:09:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:11:46.644 02:09:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.644 02:09:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.644 02:09:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:46.644 02:09:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:46.644 02:09:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:46.644 02:09:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:46.645 02:09:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:46.645 02:09:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:46.645 02:09:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.645 02:09:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.645 02:09:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:46.645 02:09:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:46.645 02:09:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:46.645 02:09:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:46.645 02:09:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:46.645 02:09:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.645 02:09:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:46.645 02:09:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:46.645 02:09:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:46.645 02:09:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:46.645 02:09:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:46.645 02:09:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:46.645 Cannot find device "nvmf_tgt_br" 00:11:46.645 02:09:01 -- nvmf/common.sh@154 -- # true 00:11:46.645 02:09:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.645 Cannot find device "nvmf_tgt_br2" 00:11:46.645 02:09:01 -- nvmf/common.sh@155 -- # true 00:11:46.645 02:09:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:46.645 02:09:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:46.645 Cannot find device "nvmf_tgt_br" 00:11:46.645 02:09:01 -- nvmf/common.sh@157 -- # true 00:11:46.645 02:09:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:46.902 Cannot find device "nvmf_tgt_br2" 00:11:46.902 02:09:01 -- nvmf/common.sh@158 -- # true 00:11:46.902 02:09:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:46.902 02:09:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:46.902 02:09:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.902 02:09:01 -- nvmf/common.sh@161 -- # true 00:11:46.902 02:09:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.902 02:09:01 -- nvmf/common.sh@162 -- # true 00:11:46.902 02:09:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:46.902 02:09:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:46.902 02:09:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:46.902 02:09:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:46.902 02:09:01 -- 
nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:46.902 02:09:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:46.902 02:09:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:46.902 02:09:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:46.902 02:09:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:46.902 02:09:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:46.902 02:09:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:46.902 02:09:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:46.902 02:09:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:46.902 02:09:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:46.902 02:09:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:46.902 02:09:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:46.902 02:09:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:46.902 02:09:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:46.902 02:09:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:46.902 02:09:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:46.902 02:09:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:46.902 02:09:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:46.902 02:09:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:46.902 02:09:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:46.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:11:46.902 00:11:46.902 --- 10.0.0.2 ping statistics --- 00:11:46.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.902 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:46.902 02:09:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:46.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:46.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:11:46.902 00:11:46.902 --- 10.0.0.3 ping statistics --- 00:11:46.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.903 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:46.903 02:09:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:46.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:46.903 00:11:46.903 --- 10.0.0.1 ping statistics --- 00:11:46.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.903 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:46.903 02:09:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.903 02:09:01 -- nvmf/common.sh@421 -- # return 0 00:11:46.903 02:09:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:46.903 02:09:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.903 02:09:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:46.903 02:09:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:46.903 02:09:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.903 02:09:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:46.903 02:09:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:47.160 02:09:01 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:11:47.160 02:09:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:47.160 02:09:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:47.160 02:09:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.160 02:09:01 -- nvmf/common.sh@469 -- # nvmfpid=66944 00:11:47.160 02:09:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:47.160 02:09:01 -- nvmf/common.sh@470 -- # waitforlisten 66944 00:11:47.160 02:09:01 -- common/autotest_common.sh@819 -- # '[' -z 66944 ']' 00:11:47.160 02:09:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.160 02:09:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:47.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.160 02:09:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.160 02:09:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:47.160 02:09:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.160 [2024-05-14 02:09:01.581241] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:47.160 [2024-05-14 02:09:01.581374] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.160 [2024-05-14 02:09:01.725355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:47.417 [2024-05-14 02:09:01.798334] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:47.417 [2024-05-14 02:09:01.798733] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.417 [2024-05-14 02:09:01.798911] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.417 [2024-05-14 02:09:01.799096] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
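At the end of the trace above the fixture settles the transport options for this run: with the tcp transport selected, the rdma branch is skipped, NVMF_TRANSPORT_OPTS ends up as '-t tcp -o', and the nvme-tcp kernel module is loaded for tests that use the kernel initiator. A hedged sketch of that selection as it appears from the trace; the TEST_TRANSPORT variable name is assumed, and the exact meaning of the extra '-o' flag passed later to nvmf_create_transport is not asserted here.

  TEST_TRANSPORT=tcp
  NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
  if [[ $TEST_TRANSPORT == rdma ]]; then
      :                                               # rdma-specific setup; not exercised in this tcp run
  elif [[ $TEST_TRANSPORT == tcp ]]; then
      NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS -o"   # extra tcp option appended, exactly as traced
      modprobe nvme-tcp                               # kernel NVMe/TCP initiator module
  fi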
00:11:47.417 [2024-05-14 02:09:01.799355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.417 [2024-05-14 02:09:01.799437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.417 [2024-05-14 02:09:01.799561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.350 02:09:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:48.350 02:09:02 -- common/autotest_common.sh@852 -- # return 0 00:11:48.350 02:09:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:48.350 02:09:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:48.350 02:09:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.350 02:09:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.350 02:09:02 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:11:48.350 02:09:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:48.350 [2024-05-14 02:09:02.878000] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.350 02:09:02 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:48.607 02:09:03 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.865 [2024-05-14 02:09:03.411797] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.865 02:09:03 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:49.129 02:09:03 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:49.697 Malloc0 00:11:49.697 02:09:04 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:49.697 Delay0 00:11:49.697 02:09:04 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.955 02:09:04 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:50.212 NULL1 00:11:50.212 02:09:04 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:50.779 02:09:05 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=67075 00:11:50.779 02:09:05 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:50.779 02:09:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:11:50.779 02:09:05 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.714 Read completed with error (sct=0, sc=11) 00:11:51.971 02:09:06 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.971 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:11:51.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.229 02:09:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:11:52.229 02:09:06 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:52.487 true 00:11:52.487 02:09:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:11:52.487 02:09:06 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.421 02:09:07 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.421 02:09:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:11:53.421 02:09:07 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:53.987 true 00:11:53.987 02:09:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:11:53.987 02:09:08 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.553 02:09:08 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.812 02:09:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:11:54.812 02:09:09 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:55.069 true 00:11:55.069 02:09:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:11:55.069 02:09:09 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:56.005 02:09:10 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.263 02:09:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:11:56.263 02:09:10 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:56.521 true 00:11:56.521 02:09:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:11:56.522 02:09:10 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.779 02:09:11 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.036 02:09:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:11:57.036 02:09:11 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:57.294 true 00:11:57.294 02:09:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:11:57.294 02:09:11 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.552 02:09:12 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.809 02:09:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:11:57.809 02:09:12 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:58.067 true 00:11:58.067 02:09:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:11:58.067 02:09:12 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.326 02:09:12 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.585 02:09:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:11:58.585 02:09:13 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:58.843 true 00:11:58.843 02:09:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:11:58.843 02:09:13 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.778 02:09:14 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.037 02:09:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:12:00.037 02:09:14 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:00.296 true 00:12:00.296 02:09:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:00.296 02:09:14 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.553 02:09:15 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.812 02:09:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:12:00.812 02:09:15 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:01.069 true 00:12:01.069 02:09:15 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:01.069 02:09:15 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.021 02:09:16 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.021 02:09:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:12:02.021 02:09:16 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:02.280 true 00:12:02.280 02:09:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:02.280 02:09:16 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.539 02:09:17 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.105 02:09:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:12:03.105 02:09:17 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:03.105 true 00:12:03.105 02:09:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:03.105 02:09:17 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.672 02:09:17 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.672 02:09:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:12:03.672 02:09:18 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:03.931 true 00:12:03.931 02:09:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:03.931 02:09:18 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.866 02:09:19 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.125 02:09:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:12:05.125 02:09:19 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:05.383 true 00:12:05.383 02:09:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:05.383 02:09:19 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.641 02:09:20 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.898 02:09:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:12:05.898 02:09:20 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:06.157 true 00:12:06.157 02:09:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:06.157 02:09:20 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.722 02:09:21 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:06.722 02:09:21 -- target/ns_hotplug_stress.sh@40 -- # 
null_size=1015 00:12:06.722 02:09:21 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:06.979 true 00:12:06.979 02:09:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:06.979 02:09:21 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.914 02:09:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.172 02:09:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:12:08.172 02:09:22 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:08.430 true 00:12:08.430 02:09:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:08.430 02:09:22 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.691 02:09:23 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.691 02:09:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:12:08.691 02:09:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:08.949 true 00:12:08.949 02:09:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:08.949 02:09:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.880 02:09:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.137 02:09:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:12:10.137 02:09:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:10.395 true 00:12:10.395 02:09:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:10.395 02:09:24 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.960 02:09:25 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.218 02:09:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:12:11.218 02:09:25 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:11.218 true 00:12:11.475 02:09:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:11.475 02:09:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.734 02:09:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.734 02:09:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:12:11.734 02:09:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:11.991 true 00:12:12.247 02:09:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:12.247 02:09:26 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.813 02:09:27 -- 
target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.378 02:09:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:12:13.378 02:09:27 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:13.378 true 00:12:13.635 02:09:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:13.635 02:09:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.893 02:09:28 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.151 02:09:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:12:14.151 02:09:28 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:14.408 true 00:12:14.408 02:09:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:14.408 02:09:28 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.667 02:09:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.925 02:09:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:12:14.925 02:09:29 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:15.184 true 00:12:15.184 02:09:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:15.184 02:09:29 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.442 02:09:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.700 02:09:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:12:15.700 02:09:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:15.958 true 00:12:15.959 02:09:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:15.959 02:09:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.893 02:09:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.151 02:09:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:12:17.151 02:09:31 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:17.409 true 00:12:17.409 02:09:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:17.409 02:09:31 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.975 02:09:32 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.975 02:09:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:12:17.975 02:09:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:18.232 
true 00:12:18.490 02:09:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:18.490 02:09:32 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.748 02:09:33 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.006 02:09:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:12:19.006 02:09:33 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:19.319 true 00:12:19.319 02:09:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:19.319 02:09:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.905 02:09:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.162 02:09:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:12:20.162 02:09:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:20.419 true 00:12:20.419 02:09:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:20.419 02:09:34 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.677 02:09:35 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.936 02:09:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:12:20.936 02:09:35 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:20.936 Initializing NVMe Controllers 00:12:20.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:20.936 Controller IO queue size 128, less than required. 00:12:20.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:20.936 Controller IO queue size 128, less than required. 00:12:20.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:20.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:20.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:20.936 Initialization complete. Launching workers. 
00:12:20.936 ======================================================== 00:12:20.936 Latency(us) 00:12:20.936 Device Information : IOPS MiB/s Average min max 00:12:20.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 897.06 0.44 64751.08 3270.97 1066681.53 00:12:20.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9652.56 4.71 13260.34 1715.48 578137.77 00:12:20.936 ======================================================== 00:12:20.936 Total : 10549.63 5.15 17638.73 1715.48 1066681.53 00:12:20.936 00:12:21.195 true 00:12:21.195 02:09:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67075 00:12:21.195 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (67075) - No such process 00:12:21.195 02:09:35 -- target/ns_hotplug_stress.sh@44 -- # wait 67075 00:12:21.195 02:09:35 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:21.195 02:09:35 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:12:21.195 02:09:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:21.195 02:09:35 -- nvmf/common.sh@116 -- # sync 00:12:21.195 02:09:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:21.195 02:09:35 -- nvmf/common.sh@119 -- # set +e 00:12:21.195 02:09:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:21.195 02:09:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:21.195 rmmod nvme_tcp 00:12:21.195 rmmod nvme_fabrics 00:12:21.195 rmmod nvme_keyring 00:12:21.195 02:09:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:21.195 02:09:35 -- nvmf/common.sh@123 -- # set -e 00:12:21.195 02:09:35 -- nvmf/common.sh@124 -- # return 0 00:12:21.195 02:09:35 -- nvmf/common.sh@477 -- # '[' -n 66944 ']' 00:12:21.195 02:09:35 -- nvmf/common.sh@478 -- # killprocess 66944 00:12:21.195 02:09:35 -- common/autotest_common.sh@926 -- # '[' -z 66944 ']' 00:12:21.195 02:09:35 -- common/autotest_common.sh@930 -- # kill -0 66944 00:12:21.195 02:09:35 -- common/autotest_common.sh@931 -- # uname 00:12:21.195 02:09:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:21.195 02:09:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66944 00:12:21.195 killing process with pid 66944 00:12:21.195 02:09:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:21.195 02:09:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:21.195 02:09:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66944' 00:12:21.195 02:09:35 -- common/autotest_common.sh@945 -- # kill 66944 00:12:21.195 02:09:35 -- common/autotest_common.sh@950 -- # wait 66944 00:12:21.453 02:09:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:21.453 02:09:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:21.453 02:09:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:21.453 02:09:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.453 02:09:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:21.453 02:09:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.453 02:09:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.453 02:09:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.453 02:09:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:21.453 00:12:21.453 real 0m34.862s 00:12:21.453 user 2m30.415s 00:12:21.453 sys 0m7.646s 00:12:21.453 02:09:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.453 02:09:35 -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.453 ************************************ 00:12:21.453 END TEST nvmf_ns_hotplug_stress 00:12:21.453 ************************************ 00:12:21.453 02:09:35 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:21.453 02:09:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:21.453 02:09:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:21.453 02:09:35 -- common/autotest_common.sh@10 -- # set +x 00:12:21.453 ************************************ 00:12:21.453 START TEST nvmf_connect_stress 00:12:21.453 ************************************ 00:12:21.453 02:09:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:21.712 * Looking for test storage... 00:12:21.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:21.712 02:09:36 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:21.712 02:09:36 -- nvmf/common.sh@7 -- # uname -s 00:12:21.712 02:09:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.712 02:09:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.712 02:09:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.712 02:09:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.712 02:09:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.712 02:09:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.712 02:09:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.712 02:09:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.712 02:09:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.712 02:09:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.712 02:09:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:12:21.712 02:09:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:12:21.712 02:09:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.712 02:09:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.712 02:09:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:21.712 02:09:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.712 02:09:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.712 02:09:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.712 02:09:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.712 02:09:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.712 02:09:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.712 02:09:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.712 02:09:36 -- paths/export.sh@5 -- # export PATH 00:12:21.712 02:09:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.712 02:09:36 -- nvmf/common.sh@46 -- # : 0 00:12:21.712 02:09:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:21.712 02:09:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:21.712 02:09:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:21.712 02:09:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.712 02:09:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.712 02:09:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:21.712 02:09:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:21.712 02:09:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:21.712 02:09:36 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:21.712 02:09:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:21.712 02:09:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.712 02:09:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:21.712 02:09:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:21.712 02:09:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:21.712 02:09:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.712 02:09:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.712 02:09:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.712 02:09:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:21.712 02:09:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:21.712 02:09:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:21.712 02:09:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:21.712 02:09:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:21.712 02:09:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:21.712 02:09:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.712 
02:09:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.712 02:09:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:21.712 02:09:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:21.712 02:09:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:21.712 02:09:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:21.712 02:09:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:21.712 02:09:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.712 02:09:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:21.712 02:09:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:21.712 02:09:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:21.712 02:09:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:21.712 02:09:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:21.712 02:09:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:21.712 Cannot find device "nvmf_tgt_br" 00:12:21.712 02:09:36 -- nvmf/common.sh@154 -- # true 00:12:21.712 02:09:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:21.712 Cannot find device "nvmf_tgt_br2" 00:12:21.712 02:09:36 -- nvmf/common.sh@155 -- # true 00:12:21.712 02:09:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:21.712 02:09:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:21.712 Cannot find device "nvmf_tgt_br" 00:12:21.712 02:09:36 -- nvmf/common.sh@157 -- # true 00:12:21.712 02:09:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:21.712 Cannot find device "nvmf_tgt_br2" 00:12:21.712 02:09:36 -- nvmf/common.sh@158 -- # true 00:12:21.712 02:09:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:21.712 02:09:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:21.712 02:09:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:21.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.712 02:09:36 -- nvmf/common.sh@161 -- # true 00:12:21.712 02:09:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:21.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.712 02:09:36 -- nvmf/common.sh@162 -- # true 00:12:21.712 02:09:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:21.712 02:09:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:21.712 02:09:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:21.712 02:09:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:21.712 02:09:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:21.712 02:09:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:21.712 02:09:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:21.712 02:09:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:21.712 02:09:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:21.971 02:09:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:21.971 02:09:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:21.971 
02:09:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:21.971 02:09:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:21.971 02:09:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:21.971 02:09:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:21.971 02:09:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:21.971 02:09:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:21.971 02:09:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:21.971 02:09:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:21.971 02:09:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:21.971 02:09:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:21.971 02:09:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:21.971 02:09:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:21.971 02:09:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:21.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:12:21.971 00:12:21.971 --- 10.0.0.2 ping statistics --- 00:12:21.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.971 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:21.971 02:09:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:21.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:21.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:21.971 00:12:21.971 --- 10.0.0.3 ping statistics --- 00:12:21.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.971 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:21.971 02:09:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:21.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:21.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:21.971 00:12:21.971 --- 10.0.0.1 ping statistics --- 00:12:21.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.971 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:21.971 02:09:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.971 02:09:36 -- nvmf/common.sh@421 -- # return 0 00:12:21.971 02:09:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:21.971 02:09:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.971 02:09:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:21.971 02:09:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:21.971 02:09:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.971 02:09:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:21.971 02:09:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:21.971 02:09:36 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:21.971 02:09:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:21.971 02:09:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:21.971 02:09:36 -- common/autotest_common.sh@10 -- # set +x 00:12:21.971 02:09:36 -- nvmf/common.sh@469 -- # nvmfpid=68198 00:12:21.971 02:09:36 -- nvmf/common.sh@470 -- # waitforlisten 68198 00:12:21.971 02:09:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:21.971 02:09:36 -- common/autotest_common.sh@819 -- # '[' -z 68198 ']' 00:12:21.971 02:09:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.971 02:09:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:21.971 02:09:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.971 02:09:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:21.971 02:09:36 -- common/autotest_common.sh@10 -- # set +x 00:12:21.971 [2024-05-14 02:09:36.488640] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:21.971 [2024-05-14 02:09:36.488726] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.229 [2024-05-14 02:09:36.624667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:22.229 [2024-05-14 02:09:36.691892] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:22.229 [2024-05-14 02:09:36.692064] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.229 [2024-05-14 02:09:36.692082] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.229 [2024-05-14 02:09:36.692093] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
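nvmfappstart, traced just above, launches nvmf_tgt inside the target namespace, records its pid, and then waitforlisten blocks until the application is actually serving its JSON-RPC socket at /var/tmp/spdk.sock. A rough stand-in for that start-and-wait pattern is sketched below; the polling loop is an illustrative approximation, not the real waitforlisten helper from autotest_common.sh:

    # start the target in the namespace: -m 0xE pins it to cores 1-3, -e 0xFFFF enables all trace groups
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # poll the RPC socket until the target answers, or bail out if it dies first
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited before listening' >&2; exit 1; }
        sleep 0.5
    done

Once the socket answers, the subsystem configuration that follows in the trace is driven entirely through that same rpc.py interface.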
00:12:22.229 [2024-05-14 02:09:36.692238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.229 [2024-05-14 02:09:36.692676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.229 [2024-05-14 02:09:36.692712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.165 02:09:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:23.165 02:09:37 -- common/autotest_common.sh@852 -- # return 0 00:12:23.165 02:09:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:23.165 02:09:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:23.165 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:12:23.165 02:09:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.165 02:09:37 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:23.165 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.165 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:12:23.165 [2024-05-14 02:09:37.486042] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.165 02:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.165 02:09:37 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:23.165 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.165 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:12:23.165 02:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.165 02:09:37 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.165 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.165 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:12:23.165 [2024-05-14 02:09:37.503585] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.165 02:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.165 02:09:37 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:23.165 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.165 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:12:23.165 NULL1 00:12:23.165 02:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.165 02:09:37 -- target/connect_stress.sh@21 -- # PERF_PID=68250 00:12:23.165 02:09:37 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:23.166 02:09:37 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:23.166 02:09:37 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- 
target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:23.166 02:09:37 -- target/connect_stress.sh@28 -- # cat 00:12:23.166 02:09:37 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:23.166 02:09:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.166 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.166 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 02:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.424 02:09:37 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:23.424 02:09:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.424 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.424 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:12:23.682 02:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.682 02:09:38 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:23.682 02:09:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.682 02:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.682 02:09:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.249 02:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.249 02:09:38 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:24.249 02:09:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.249 02:09:38 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:24.249 02:09:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.507 02:09:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.507 02:09:38 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:24.507 02:09:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.507 02:09:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.507 02:09:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.765 02:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.765 02:09:39 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:24.765 02:09:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.765 02:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.765 02:09:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.024 02:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.024 02:09:39 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:25.024 02:09:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.024 02:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.024 02:09:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.282 02:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.282 02:09:39 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:25.282 02:09:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.282 02:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.282 02:09:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.848 02:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.848 02:09:40 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:25.848 02:09:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.848 02:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.848 02:09:40 -- common/autotest_common.sh@10 -- # set +x 00:12:26.107 02:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.107 02:09:40 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:26.107 02:09:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.107 02:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.107 02:09:40 -- common/autotest_common.sh@10 -- # set +x 00:12:26.366 02:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.366 02:09:40 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:26.366 02:09:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.366 02:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.366 02:09:40 -- common/autotest_common.sh@10 -- # set +x 00:12:26.624 02:09:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.624 02:09:41 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:26.624 02:09:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.624 02:09:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.624 02:09:41 -- common/autotest_common.sh@10 -- # set +x 00:12:26.882 02:09:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.882 02:09:41 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:26.882 02:09:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.882 02:09:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.882 02:09:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.448 02:09:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.448 02:09:41 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:27.448 02:09:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.448 02:09:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.448 
02:09:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.707 02:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.707 02:09:42 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:27.707 02:09:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.707 02:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.707 02:09:42 -- common/autotest_common.sh@10 -- # set +x 00:12:27.965 02:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.965 02:09:42 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:27.965 02:09:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.965 02:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.965 02:09:42 -- common/autotest_common.sh@10 -- # set +x 00:12:28.223 02:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.223 02:09:42 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:28.223 02:09:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.223 02:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.223 02:09:42 -- common/autotest_common.sh@10 -- # set +x 00:12:28.482 02:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.482 02:09:43 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:28.482 02:09:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.482 02:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.482 02:09:43 -- common/autotest_common.sh@10 -- # set +x 00:12:29.048 02:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.048 02:09:43 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:29.048 02:09:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.048 02:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.048 02:09:43 -- common/autotest_common.sh@10 -- # set +x 00:12:29.305 02:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.305 02:09:43 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:29.305 02:09:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.305 02:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.305 02:09:43 -- common/autotest_common.sh@10 -- # set +x 00:12:29.563 02:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.563 02:09:44 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:29.563 02:09:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.563 02:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.563 02:09:44 -- common/autotest_common.sh@10 -- # set +x 00:12:29.822 02:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.822 02:09:44 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:29.822 02:09:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.822 02:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.822 02:09:44 -- common/autotest_common.sh@10 -- # set +x 00:12:30.080 02:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.080 02:09:44 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:30.080 02:09:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.080 02:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.080 02:09:44 -- common/autotest_common.sh@10 -- # set +x 00:12:30.647 02:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.647 02:09:44 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:30.647 02:09:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.647 02:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.647 02:09:44 -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.909 02:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.909 02:09:45 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:30.909 02:09:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.909 02:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.909 02:09:45 -- common/autotest_common.sh@10 -- # set +x 00:12:31.169 02:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.169 02:09:45 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:31.169 02:09:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.169 02:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.169 02:09:45 -- common/autotest_common.sh@10 -- # set +x 00:12:31.427 02:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.427 02:09:45 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:31.427 02:09:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.427 02:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.427 02:09:45 -- common/autotest_common.sh@10 -- # set +x 00:12:31.685 02:09:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.685 02:09:46 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:31.685 02:09:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.685 02:09:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.685 02:09:46 -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 02:09:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.252 02:09:46 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:32.252 02:09:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.252 02:09:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.252 02:09:46 -- common/autotest_common.sh@10 -- # set +x 00:12:32.510 02:09:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.510 02:09:46 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:32.510 02:09:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.510 02:09:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.510 02:09:46 -- common/autotest_common.sh@10 -- # set +x 00:12:32.768 02:09:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.768 02:09:47 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:32.768 02:09:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.768 02:09:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.768 02:09:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.026 02:09:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.026 02:09:47 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:33.026 02:09:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.026 02:09:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.026 02:09:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.284 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:33.284 02:09:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.284 02:09:47 -- target/connect_stress.sh@34 -- # kill -0 68250 00:12:33.285 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (68250) - No such process 00:12:33.285 02:09:47 -- target/connect_stress.sh@38 -- # wait 68250 00:12:33.285 02:09:47 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:33.285 02:09:47 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:33.285 02:09:47 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:12:33.285 02:09:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:33.285 02:09:47 -- nvmf/common.sh@116 -- # sync 00:12:33.543 02:09:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:33.543 02:09:47 -- nvmf/common.sh@119 -- # set +e 00:12:33.543 02:09:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:33.543 02:09:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:33.543 rmmod nvme_tcp 00:12:33.543 rmmod nvme_fabrics 00:12:33.543 rmmod nvme_keyring 00:12:33.543 02:09:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:33.543 02:09:47 -- nvmf/common.sh@123 -- # set -e 00:12:33.543 02:09:47 -- nvmf/common.sh@124 -- # return 0 00:12:33.543 02:09:47 -- nvmf/common.sh@477 -- # '[' -n 68198 ']' 00:12:33.543 02:09:47 -- nvmf/common.sh@478 -- # killprocess 68198 00:12:33.543 02:09:47 -- common/autotest_common.sh@926 -- # '[' -z 68198 ']' 00:12:33.543 02:09:47 -- common/autotest_common.sh@930 -- # kill -0 68198 00:12:33.543 02:09:47 -- common/autotest_common.sh@931 -- # uname 00:12:33.543 02:09:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:33.543 02:09:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68198 00:12:33.543 killing process with pid 68198 00:12:33.543 02:09:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:33.543 02:09:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:33.543 02:09:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68198' 00:12:33.543 02:09:47 -- common/autotest_common.sh@945 -- # kill 68198 00:12:33.543 02:09:47 -- common/autotest_common.sh@950 -- # wait 68198 00:12:33.802 02:09:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:33.802 02:09:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:33.802 02:09:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:33.802 02:09:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.802 02:09:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:33.802 02:09:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.802 02:09:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.802 02:09:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.802 02:09:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:33.802 ************************************ 00:12:33.802 END TEST nvmf_connect_stress 00:12:33.802 ************************************ 00:12:33.802 00:12:33.802 real 0m12.174s 00:12:33.802 user 0m40.853s 00:12:33.802 sys 0m3.088s 00:12:33.802 02:09:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.802 02:09:48 -- common/autotest_common.sh@10 -- # set +x 00:12:33.802 02:09:48 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:33.802 02:09:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:33.802 02:09:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:33.802 02:09:48 -- common/autotest_common.sh@10 -- # set +x 00:12:33.802 ************************************ 00:12:33.802 START TEST nvmf_fused_ordering 00:12:33.802 ************************************ 00:12:33.802 02:09:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:33.802 * Looking for test storage... 
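The run-up to this point is the tail of connect_stress.sh's monitoring loop: line 34 polls the stress tool (pid 68250 in this run) with kill -0, line 35 drives the target with rpc_cmd, and once the tool exits (the "No such process" message above) lines 38-43 reap it, delete rpc.txt and call nvmftestfini. A minimal bash sketch of that pattern, reconstructed from the traced line numbers — the loop body, the stand-in variable names stress_pid/rpc_txt, and feeding rpc.txt to rpc_cmd on stdin are assumptions, not shown verbatim in this log:

  while kill -0 "$stress_pid"; do   # line 34: the final, failing check prints the "No such process" seen above
      rpc_cmd < "$rpc_txt"          # line 35: keep the target's RPC server busy while the stress tool runs
  done
  wait "$stress_pid"                # line 38: reap the exited stress tool
  rm -f "$rpc_txt"                  # line 39: drop the queued RPC commands
  trap - SIGINT SIGTERM EXIT        # line 41: clear the error trap
  nvmftestfini                      # line 43: stop nvmf_tgt and unload the kernel initiator modules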
00:12:33.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:33.802 02:09:48 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:33.802 02:09:48 -- nvmf/common.sh@7 -- # uname -s 00:12:33.802 02:09:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.802 02:09:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.802 02:09:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.802 02:09:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.802 02:09:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.802 02:09:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.802 02:09:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.802 02:09:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.802 02:09:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.802 02:09:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.802 02:09:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:12:33.802 02:09:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:12:33.802 02:09:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.802 02:09:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.802 02:09:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:33.802 02:09:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:33.802 02:09:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.802 02:09:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.802 02:09:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.802 02:09:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.802 02:09:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.802 02:09:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.802 02:09:48 -- 
paths/export.sh@5 -- # export PATH 00:12:33.803 02:09:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.803 02:09:48 -- nvmf/common.sh@46 -- # : 0 00:12:33.803 02:09:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:33.803 02:09:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:33.803 02:09:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:33.803 02:09:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.803 02:09:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.803 02:09:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:33.803 02:09:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:33.803 02:09:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:33.803 02:09:48 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:33.803 02:09:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:33.803 02:09:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.803 02:09:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:33.803 02:09:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:33.803 02:09:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:33.803 02:09:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.803 02:09:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.803 02:09:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.803 02:09:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:33.803 02:09:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:33.803 02:09:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:33.803 02:09:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:33.803 02:09:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:33.803 02:09:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:33.803 02:09:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.803 02:09:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.803 02:09:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:33.803 02:09:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:33.803 02:09:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:33.803 02:09:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:33.803 02:09:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:33.803 02:09:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.803 02:09:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:33.803 02:09:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:33.803 02:09:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:33.803 02:09:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:33.803 02:09:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:33.803 02:09:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:33.803 Cannot find device "nvmf_tgt_br" 00:12:33.803 
02:09:48 -- nvmf/common.sh@154 -- # true 00:12:33.803 02:09:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:33.803 Cannot find device "nvmf_tgt_br2" 00:12:33.803 02:09:48 -- nvmf/common.sh@155 -- # true 00:12:33.803 02:09:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:33.803 02:09:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:33.803 Cannot find device "nvmf_tgt_br" 00:12:33.803 02:09:48 -- nvmf/common.sh@157 -- # true 00:12:33.803 02:09:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:33.803 Cannot find device "nvmf_tgt_br2" 00:12:33.803 02:09:48 -- nvmf/common.sh@158 -- # true 00:12:33.803 02:09:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:34.061 02:09:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:34.061 02:09:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:34.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.061 02:09:48 -- nvmf/common.sh@161 -- # true 00:12:34.061 02:09:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:34.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.061 02:09:48 -- nvmf/common.sh@162 -- # true 00:12:34.061 02:09:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:34.061 02:09:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:34.061 02:09:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:34.061 02:09:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:34.061 02:09:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:34.061 02:09:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:34.061 02:09:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:34.061 02:09:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:34.061 02:09:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:34.061 02:09:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:34.061 02:09:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:34.061 02:09:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:34.061 02:09:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:34.061 02:09:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:34.061 02:09:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:34.061 02:09:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:34.061 02:09:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:34.061 02:09:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:34.061 02:09:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:34.061 02:09:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:34.061 02:09:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:34.061 02:09:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:34.061 02:09:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:34.061 02:09:48 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:12:34.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:34.061 00:12:34.061 --- 10.0.0.2 ping statistics --- 00:12:34.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.061 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:34.061 02:09:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:34.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:34.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:34.061 00:12:34.061 --- 10.0.0.3 ping statistics --- 00:12:34.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.061 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:34.061 02:09:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:34.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:34.061 00:12:34.061 --- 10.0.0.1 ping statistics --- 00:12:34.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.061 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:34.061 02:09:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.061 02:09:48 -- nvmf/common.sh@421 -- # return 0 00:12:34.061 02:09:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:34.061 02:09:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.062 02:09:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:34.062 02:09:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:34.062 02:09:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.062 02:09:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:34.062 02:09:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:34.320 02:09:48 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:34.320 02:09:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:34.320 02:09:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:34.320 02:09:48 -- common/autotest_common.sh@10 -- # set +x 00:12:34.320 02:09:48 -- nvmf/common.sh@469 -- # nvmfpid=68576 00:12:34.320 02:09:48 -- nvmf/common.sh@470 -- # waitforlisten 68576 00:12:34.320 02:09:48 -- common/autotest_common.sh@819 -- # '[' -z 68576 ']' 00:12:34.320 02:09:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.320 02:09:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:34.320 02:09:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:34.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.320 02:09:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.320 02:09:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:34.320 02:09:48 -- common/autotest_common.sh@10 -- # set +x 00:12:34.320 [2024-05-14 02:09:48.733882] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
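The nvmf_veth_init block traced just above is the whole network fixture for these tcp/virt tests: one namespace for the target, veth pairs whose host-side ends hang off a bridge, an iptables rule for port 4420, and a single ping to each address as a sanity check. Condensed into a standalone sequence with the same names and addresses the log shows (the guard/teardown commands that precede it, and their expected "Cannot find device" noise, are omitted):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: *_if is the usable interface, *_br is the bridge-side peer
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-side interfaces live inside the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # address plan: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring every link up, inside and outside the namespace
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers so initiator and target share one L2 segment
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # open the NVMe/TCP port and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity check: each address answers one ping
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The 10.0.0.2:4420 endpoint built here is the same one the later nvmf_subsystem_add_listener call and the fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ...' argument refer to.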
00:12:34.320 [2024-05-14 02:09:48.733990] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.320 [2024-05-14 02:09:48.877076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.579 [2024-05-14 02:09:48.944724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:34.579 [2024-05-14 02:09:48.944899] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.579 [2024-05-14 02:09:48.944914] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.579 [2024-05-14 02:09:48.944925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.579 [2024-05-14 02:09:48.944962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.146 02:09:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:35.146 02:09:49 -- common/autotest_common.sh@852 -- # return 0 00:12:35.146 02:09:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:35.146 02:09:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:35.146 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.146 02:09:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.146 02:09:49 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.146 02:09:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.146 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.146 [2024-05-14 02:09:49.730235] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.405 02:09:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.405 02:09:49 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:35.405 02:09:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.405 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.405 02:09:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.405 02:09:49 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.405 02:09:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.405 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.405 [2024-05-14 02:09:49.746322] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.405 02:09:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.405 02:09:49 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:35.405 02:09:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.405 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.405 NULL1 00:12:35.405 02:09:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.405 02:09:49 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:35.405 02:09:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.405 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.405 02:09:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.405 02:09:49 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
NULL1 00:12:35.405 02:09:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.405 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.405 02:09:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.405 02:09:49 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:35.405 [2024-05-14 02:09:49.798393] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:35.405 [2024-05-14 02:09:49.798449] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68626 ] 00:12:35.972 Attached to nqn.2016-06.io.spdk:cnode1 00:12:35.972 Namespace ID: 1 size: 1GB 00:12:35.972 fused_ordering(0) 00:12:35.972 fused_ordering(1) 00:12:35.972 fused_ordering(2) 00:12:35.972 fused_ordering(3) 00:12:35.972 fused_ordering(4) 00:12:35.972 fused_ordering(5) 00:12:35.972 fused_ordering(6) 00:12:35.972 fused_ordering(7) 00:12:35.972 fused_ordering(8) 00:12:35.972 fused_ordering(9) 00:12:35.972 fused_ordering(10) 00:12:35.972 fused_ordering(11) 00:12:35.972 fused_ordering(12) 00:12:35.972 fused_ordering(13) 00:12:35.972 fused_ordering(14) 00:12:35.972 fused_ordering(15) 00:12:35.972 fused_ordering(16) 00:12:35.972 fused_ordering(17) 00:12:35.972 fused_ordering(18) 00:12:35.972 fused_ordering(19) 00:12:35.972 fused_ordering(20) 00:12:35.972 fused_ordering(21) 00:12:35.972 fused_ordering(22) 00:12:35.972 fused_ordering(23) 00:12:35.972 fused_ordering(24) 00:12:35.972 fused_ordering(25) 00:12:35.972 fused_ordering(26) 00:12:35.972 fused_ordering(27) 00:12:35.972 fused_ordering(28) 00:12:35.972 fused_ordering(29) 00:12:35.972 fused_ordering(30) 00:12:35.972 fused_ordering(31) 00:12:35.972 fused_ordering(32) 00:12:35.972 fused_ordering(33) 00:12:35.972 fused_ordering(34) 00:12:35.972 fused_ordering(35) 00:12:35.972 fused_ordering(36) 00:12:35.972 fused_ordering(37) 00:12:35.972 fused_ordering(38) 00:12:35.972 fused_ordering(39) 00:12:35.972 fused_ordering(40) 00:12:35.972 fused_ordering(41) 00:12:35.972 fused_ordering(42) 00:12:35.972 fused_ordering(43) 00:12:35.972 fused_ordering(44) 00:12:35.972 fused_ordering(45) 00:12:35.972 fused_ordering(46) 00:12:35.972 fused_ordering(47) 00:12:35.972 fused_ordering(48) 00:12:35.972 fused_ordering(49) 00:12:35.972 fused_ordering(50) 00:12:35.972 fused_ordering(51) 00:12:35.972 fused_ordering(52) 00:12:35.972 fused_ordering(53) 00:12:35.972 fused_ordering(54) 00:12:35.972 fused_ordering(55) 00:12:35.972 fused_ordering(56) 00:12:35.972 fused_ordering(57) 00:12:35.972 fused_ordering(58) 00:12:35.972 fused_ordering(59) 00:12:35.972 fused_ordering(60) 00:12:35.972 fused_ordering(61) 00:12:35.972 fused_ordering(62) 00:12:35.972 fused_ordering(63) 00:12:35.972 fused_ordering(64) 00:12:35.972 fused_ordering(65) 00:12:35.972 fused_ordering(66) 00:12:35.972 fused_ordering(67) 00:12:35.972 fused_ordering(68) 00:12:35.972 fused_ordering(69) 00:12:35.972 fused_ordering(70) 00:12:35.972 fused_ordering(71) 00:12:35.972 fused_ordering(72) 00:12:35.972 fused_ordering(73) 00:12:35.972 fused_ordering(74) 00:12:35.972 fused_ordering(75) 00:12:35.972 fused_ordering(76) 00:12:35.972 fused_ordering(77) 00:12:35.972 fused_ordering(78) 00:12:35.972 fused_ordering(79) 00:12:35.972 fused_ordering(80) 00:12:35.972 
fused_ordering(81) 00:12:35.972 fused_ordering(82) 00:12:35.972 fused_ordering(83) 00:12:35.972 fused_ordering(84) 00:12:35.972 fused_ordering(85) 00:12:35.972 fused_ordering(86) 00:12:35.972 fused_ordering(87) 00:12:35.972 fused_ordering(88) 00:12:35.972 fused_ordering(89) 00:12:35.972 fused_ordering(90) 00:12:35.972 fused_ordering(91) 00:12:35.972 fused_ordering(92) 00:12:35.972 fused_ordering(93) 00:12:35.972 fused_ordering(94) 00:12:35.972 fused_ordering(95) 00:12:35.972 fused_ordering(96) 00:12:35.972 fused_ordering(97) 00:12:35.972 fused_ordering(98) 00:12:35.972 fused_ordering(99) 00:12:35.972 fused_ordering(100) 00:12:35.972 fused_ordering(101) 00:12:35.972 fused_ordering(102) 00:12:35.972 fused_ordering(103) 00:12:35.972 fused_ordering(104) 00:12:35.972 fused_ordering(105) 00:12:35.972 fused_ordering(106) 00:12:35.972 fused_ordering(107) 00:12:35.972 fused_ordering(108) 00:12:35.972 fused_ordering(109) 00:12:35.972 fused_ordering(110) 00:12:35.972 fused_ordering(111) 00:12:35.972 fused_ordering(112) 00:12:35.972 fused_ordering(113) 00:12:35.972 fused_ordering(114) 00:12:35.972 fused_ordering(115) 00:12:35.972 fused_ordering(116) 00:12:35.972 fused_ordering(117) 00:12:35.972 fused_ordering(118) 00:12:35.972 fused_ordering(119) 00:12:35.972 fused_ordering(120) 00:12:35.972 fused_ordering(121) 00:12:35.972 fused_ordering(122) 00:12:35.972 fused_ordering(123) 00:12:35.972 fused_ordering(124) 00:12:35.972 fused_ordering(125) 00:12:35.972 fused_ordering(126) 00:12:35.972 fused_ordering(127) 00:12:35.972 fused_ordering(128) 00:12:35.972 fused_ordering(129) 00:12:35.973 fused_ordering(130) 00:12:35.973 fused_ordering(131) 00:12:35.973 fused_ordering(132) 00:12:35.973 fused_ordering(133) 00:12:35.973 fused_ordering(134) 00:12:35.973 fused_ordering(135) 00:12:35.973 fused_ordering(136) 00:12:35.973 fused_ordering(137) 00:12:35.973 fused_ordering(138) 00:12:35.973 fused_ordering(139) 00:12:35.973 fused_ordering(140) 00:12:35.973 fused_ordering(141) 00:12:35.973 fused_ordering(142) 00:12:35.973 fused_ordering(143) 00:12:35.973 fused_ordering(144) 00:12:35.973 fused_ordering(145) 00:12:35.973 fused_ordering(146) 00:12:35.973 fused_ordering(147) 00:12:35.973 fused_ordering(148) 00:12:35.973 fused_ordering(149) 00:12:35.973 fused_ordering(150) 00:12:35.973 fused_ordering(151) 00:12:35.973 fused_ordering(152) 00:12:35.973 fused_ordering(153) 00:12:35.973 fused_ordering(154) 00:12:35.973 fused_ordering(155) 00:12:35.973 fused_ordering(156) 00:12:35.973 fused_ordering(157) 00:12:35.973 fused_ordering(158) 00:12:35.973 fused_ordering(159) 00:12:35.973 fused_ordering(160) 00:12:35.973 fused_ordering(161) 00:12:35.973 fused_ordering(162) 00:12:35.973 fused_ordering(163) 00:12:35.973 fused_ordering(164) 00:12:35.973 fused_ordering(165) 00:12:35.973 fused_ordering(166) 00:12:35.973 fused_ordering(167) 00:12:35.973 fused_ordering(168) 00:12:35.973 fused_ordering(169) 00:12:35.973 fused_ordering(170) 00:12:35.973 fused_ordering(171) 00:12:35.973 fused_ordering(172) 00:12:35.973 fused_ordering(173) 00:12:35.973 fused_ordering(174) 00:12:35.973 fused_ordering(175) 00:12:35.973 fused_ordering(176) 00:12:35.973 fused_ordering(177) 00:12:35.973 fused_ordering(178) 00:12:35.973 fused_ordering(179) 00:12:35.973 fused_ordering(180) 00:12:35.973 fused_ordering(181) 00:12:35.973 fused_ordering(182) 00:12:35.973 fused_ordering(183) 00:12:35.973 fused_ordering(184) 00:12:35.973 fused_ordering(185) 00:12:35.973 fused_ordering(186) 00:12:35.973 fused_ordering(187) 00:12:35.973 fused_ordering(188) 00:12:35.973 
fused_ordering(189) 00:12:35.973 fused_ordering(190) 00:12:35.973 fused_ordering(191) 00:12:35.973 fused_ordering(192) 00:12:35.973 fused_ordering(193) 00:12:35.973 fused_ordering(194) 00:12:35.973 fused_ordering(195) 00:12:35.973 fused_ordering(196) 00:12:35.973 fused_ordering(197) 00:12:35.973 fused_ordering(198) 00:12:35.973 fused_ordering(199) 00:12:35.973 fused_ordering(200) 00:12:35.973 fused_ordering(201) 00:12:35.973 fused_ordering(202) 00:12:35.973 fused_ordering(203) 00:12:35.973 fused_ordering(204) 00:12:35.973 fused_ordering(205) 00:12:35.973 fused_ordering(206) 00:12:35.973 fused_ordering(207) 00:12:35.973 fused_ordering(208) 00:12:35.973 fused_ordering(209) 00:12:35.973 fused_ordering(210) 00:12:35.973 fused_ordering(211) 00:12:35.973 fused_ordering(212) 00:12:35.973 fused_ordering(213) 00:12:35.973 fused_ordering(214) 00:12:35.973 fused_ordering(215) 00:12:35.973 fused_ordering(216) 00:12:35.973 fused_ordering(217) 00:12:35.973 fused_ordering(218) 00:12:35.973 fused_ordering(219) 00:12:35.973 fused_ordering(220) 00:12:35.973 fused_ordering(221) 00:12:35.973 fused_ordering(222) 00:12:35.973 fused_ordering(223) 00:12:35.973 fused_ordering(224) 00:12:35.973 fused_ordering(225) 00:12:35.973 fused_ordering(226) 00:12:35.973 fused_ordering(227) 00:12:35.973 fused_ordering(228) 00:12:35.973 fused_ordering(229) 00:12:35.973 fused_ordering(230) 00:12:35.973 fused_ordering(231) 00:12:35.973 fused_ordering(232) 00:12:35.973 fused_ordering(233) 00:12:35.973 fused_ordering(234) 00:12:35.973 fused_ordering(235) 00:12:35.973 fused_ordering(236) 00:12:35.973 fused_ordering(237) 00:12:35.973 fused_ordering(238) 00:12:35.973 fused_ordering(239) 00:12:35.973 fused_ordering(240) 00:12:35.973 fused_ordering(241) 00:12:35.973 fused_ordering(242) 00:12:35.973 fused_ordering(243) 00:12:35.973 fused_ordering(244) 00:12:35.973 fused_ordering(245) 00:12:35.973 fused_ordering(246) 00:12:35.973 fused_ordering(247) 00:12:35.973 fused_ordering(248) 00:12:35.973 fused_ordering(249) 00:12:35.973 fused_ordering(250) 00:12:35.973 fused_ordering(251) 00:12:35.973 fused_ordering(252) 00:12:35.973 fused_ordering(253) 00:12:35.973 fused_ordering(254) 00:12:35.973 fused_ordering(255) 00:12:35.973 fused_ordering(256) 00:12:35.973 fused_ordering(257) 00:12:35.973 fused_ordering(258) 00:12:35.973 fused_ordering(259) 00:12:35.973 fused_ordering(260) 00:12:35.973 fused_ordering(261) 00:12:35.973 fused_ordering(262) 00:12:35.973 fused_ordering(263) 00:12:35.973 fused_ordering(264) 00:12:35.973 fused_ordering(265) 00:12:35.973 fused_ordering(266) 00:12:35.973 fused_ordering(267) 00:12:35.973 fused_ordering(268) 00:12:35.973 fused_ordering(269) 00:12:35.973 fused_ordering(270) 00:12:35.973 fused_ordering(271) 00:12:35.973 fused_ordering(272) 00:12:35.973 fused_ordering(273) 00:12:35.973 fused_ordering(274) 00:12:35.973 fused_ordering(275) 00:12:35.973 fused_ordering(276) 00:12:35.973 fused_ordering(277) 00:12:35.973 fused_ordering(278) 00:12:35.973 fused_ordering(279) 00:12:35.973 fused_ordering(280) 00:12:35.973 fused_ordering(281) 00:12:35.973 fused_ordering(282) 00:12:35.973 fused_ordering(283) 00:12:35.973 fused_ordering(284) 00:12:35.973 fused_ordering(285) 00:12:35.973 fused_ordering(286) 00:12:35.973 fused_ordering(287) 00:12:35.973 fused_ordering(288) 00:12:35.973 fused_ordering(289) 00:12:35.973 fused_ordering(290) 00:12:35.973 fused_ordering(291) 00:12:35.973 fused_ordering(292) 00:12:35.973 fused_ordering(293) 00:12:35.973 fused_ordering(294) 00:12:35.973 fused_ordering(295) 00:12:35.973 fused_ordering(296) 
00:12:35.973 fused_ordering(297) 00:12:35.973 fused_ordering(298) 00:12:35.973 fused_ordering(299) 00:12:35.973 fused_ordering(300) 00:12:35.973 fused_ordering(301) 00:12:35.973 fused_ordering(302) 00:12:35.973 fused_ordering(303) 00:12:35.973 fused_ordering(304) 00:12:35.973 fused_ordering(305) 00:12:35.973 fused_ordering(306) 00:12:35.973 fused_ordering(307) 00:12:35.973 fused_ordering(308) 00:12:35.973 fused_ordering(309) 00:12:35.973 fused_ordering(310) 00:12:35.973 fused_ordering(311) 00:12:35.973 fused_ordering(312) 00:12:35.973 fused_ordering(313) 00:12:35.973 fused_ordering(314) 00:12:35.973 fused_ordering(315) 00:12:35.973 fused_ordering(316) 00:12:35.973 fused_ordering(317) 00:12:35.973 fused_ordering(318) 00:12:35.973 fused_ordering(319) 00:12:35.973 fused_ordering(320) 00:12:35.973 fused_ordering(321) 00:12:35.973 fused_ordering(322) 00:12:35.973 fused_ordering(323) 00:12:35.973 fused_ordering(324) 00:12:35.973 fused_ordering(325) 00:12:35.973 fused_ordering(326) 00:12:35.973 fused_ordering(327) 00:12:35.973 fused_ordering(328) 00:12:35.973 fused_ordering(329) 00:12:35.973 fused_ordering(330) 00:12:35.973 fused_ordering(331) 00:12:35.973 fused_ordering(332) 00:12:35.973 fused_ordering(333) 00:12:35.973 fused_ordering(334) 00:12:35.973 fused_ordering(335) 00:12:35.973 fused_ordering(336) 00:12:35.973 fused_ordering(337) 00:12:35.973 fused_ordering(338) 00:12:35.973 fused_ordering(339) 00:12:35.973 fused_ordering(340) 00:12:35.973 fused_ordering(341) 00:12:35.973 fused_ordering(342) 00:12:35.973 fused_ordering(343) 00:12:35.973 fused_ordering(344) 00:12:35.973 fused_ordering(345) 00:12:35.973 fused_ordering(346) 00:12:35.973 fused_ordering(347) 00:12:35.973 fused_ordering(348) 00:12:35.973 fused_ordering(349) 00:12:35.973 fused_ordering(350) 00:12:35.973 fused_ordering(351) 00:12:35.973 fused_ordering(352) 00:12:35.973 fused_ordering(353) 00:12:35.973 fused_ordering(354) 00:12:35.973 fused_ordering(355) 00:12:35.973 fused_ordering(356) 00:12:35.973 fused_ordering(357) 00:12:35.973 fused_ordering(358) 00:12:35.973 fused_ordering(359) 00:12:35.973 fused_ordering(360) 00:12:35.973 fused_ordering(361) 00:12:35.973 fused_ordering(362) 00:12:35.973 fused_ordering(363) 00:12:35.973 fused_ordering(364) 00:12:35.973 fused_ordering(365) 00:12:35.973 fused_ordering(366) 00:12:35.973 fused_ordering(367) 00:12:35.973 fused_ordering(368) 00:12:35.973 fused_ordering(369) 00:12:35.973 fused_ordering(370) 00:12:35.973 fused_ordering(371) 00:12:35.973 fused_ordering(372) 00:12:35.973 fused_ordering(373) 00:12:35.973 fused_ordering(374) 00:12:35.973 fused_ordering(375) 00:12:35.973 fused_ordering(376) 00:12:35.973 fused_ordering(377) 00:12:35.973 fused_ordering(378) 00:12:35.973 fused_ordering(379) 00:12:35.973 fused_ordering(380) 00:12:35.973 fused_ordering(381) 00:12:35.973 fused_ordering(382) 00:12:35.973 fused_ordering(383) 00:12:35.973 fused_ordering(384) 00:12:35.973 fused_ordering(385) 00:12:35.973 fused_ordering(386) 00:12:35.973 fused_ordering(387) 00:12:35.973 fused_ordering(388) 00:12:35.973 fused_ordering(389) 00:12:35.973 fused_ordering(390) 00:12:35.973 fused_ordering(391) 00:12:35.973 fused_ordering(392) 00:12:35.973 fused_ordering(393) 00:12:35.973 fused_ordering(394) 00:12:35.973 fused_ordering(395) 00:12:35.973 fused_ordering(396) 00:12:35.973 fused_ordering(397) 00:12:35.973 fused_ordering(398) 00:12:35.973 fused_ordering(399) 00:12:35.973 fused_ordering(400) 00:12:35.973 fused_ordering(401) 00:12:35.973 fused_ordering(402) 00:12:35.973 fused_ordering(403) 00:12:35.973 
fused_ordering(404) 00:12:35.973 fused_ordering(405) 00:12:35.973 fused_ordering(406) 00:12:35.973 fused_ordering(407) 00:12:35.973 fused_ordering(408) 00:12:35.973 fused_ordering(409) 00:12:35.973 fused_ordering(410) 00:12:36.539 fused_ordering(411) 00:12:36.539 fused_ordering(412) 00:12:36.539 fused_ordering(413) 00:12:36.539 fused_ordering(414) 00:12:36.539 fused_ordering(415) 00:12:36.539 fused_ordering(416) 00:12:36.539 fused_ordering(417) 00:12:36.539 fused_ordering(418) 00:12:36.539 fused_ordering(419) 00:12:36.539 fused_ordering(420) 00:12:36.539 fused_ordering(421) 00:12:36.539 fused_ordering(422) 00:12:36.539 fused_ordering(423) 00:12:36.539 fused_ordering(424) 00:12:36.539 fused_ordering(425) 00:12:36.539 fused_ordering(426) 00:12:36.539 fused_ordering(427) 00:12:36.539 fused_ordering(428) 00:12:36.539 fused_ordering(429) 00:12:36.539 fused_ordering(430) 00:12:36.539 fused_ordering(431) 00:12:36.539 fused_ordering(432) 00:12:36.539 fused_ordering(433) 00:12:36.539 fused_ordering(434) 00:12:36.539 fused_ordering(435) 00:12:36.539 fused_ordering(436) 00:12:36.539 fused_ordering(437) 00:12:36.539 fused_ordering(438) 00:12:36.539 fused_ordering(439) 00:12:36.539 fused_ordering(440) 00:12:36.539 fused_ordering(441) 00:12:36.539 fused_ordering(442) 00:12:36.539 fused_ordering(443) 00:12:36.539 fused_ordering(444) 00:12:36.539 fused_ordering(445) 00:12:36.539 fused_ordering(446) 00:12:36.539 fused_ordering(447) 00:12:36.539 fused_ordering(448) 00:12:36.539 fused_ordering(449) 00:12:36.539 fused_ordering(450) 00:12:36.539 fused_ordering(451) 00:12:36.539 fused_ordering(452) 00:12:36.540 fused_ordering(453) 00:12:36.540 fused_ordering(454) 00:12:36.540 fused_ordering(455) 00:12:36.540 fused_ordering(456) 00:12:36.540 fused_ordering(457) 00:12:36.540 fused_ordering(458) 00:12:36.540 fused_ordering(459) 00:12:36.540 fused_ordering(460) 00:12:36.540 fused_ordering(461) 00:12:36.540 fused_ordering(462) 00:12:36.540 fused_ordering(463) 00:12:36.540 fused_ordering(464) 00:12:36.540 fused_ordering(465) 00:12:36.540 fused_ordering(466) 00:12:36.540 fused_ordering(467) 00:12:36.540 fused_ordering(468) 00:12:36.540 fused_ordering(469) 00:12:36.540 fused_ordering(470) 00:12:36.540 fused_ordering(471) 00:12:36.540 fused_ordering(472) 00:12:36.540 fused_ordering(473) 00:12:36.540 fused_ordering(474) 00:12:36.540 fused_ordering(475) 00:12:36.540 fused_ordering(476) 00:12:36.540 fused_ordering(477) 00:12:36.540 fused_ordering(478) 00:12:36.540 fused_ordering(479) 00:12:36.540 fused_ordering(480) 00:12:36.540 fused_ordering(481) 00:12:36.540 fused_ordering(482) 00:12:36.540 fused_ordering(483) 00:12:36.540 fused_ordering(484) 00:12:36.540 fused_ordering(485) 00:12:36.540 fused_ordering(486) 00:12:36.540 fused_ordering(487) 00:12:36.540 fused_ordering(488) 00:12:36.540 fused_ordering(489) 00:12:36.540 fused_ordering(490) 00:12:36.540 fused_ordering(491) 00:12:36.540 fused_ordering(492) 00:12:36.540 fused_ordering(493) 00:12:36.540 fused_ordering(494) 00:12:36.540 fused_ordering(495) 00:12:36.540 fused_ordering(496) 00:12:36.540 fused_ordering(497) 00:12:36.540 fused_ordering(498) 00:12:36.540 fused_ordering(499) 00:12:36.540 fused_ordering(500) 00:12:36.540 fused_ordering(501) 00:12:36.540 fused_ordering(502) 00:12:36.540 fused_ordering(503) 00:12:36.540 fused_ordering(504) 00:12:36.540 fused_ordering(505) 00:12:36.540 fused_ordering(506) 00:12:36.540 fused_ordering(507) 00:12:36.540 fused_ordering(508) 00:12:36.540 fused_ordering(509) 00:12:36.540 fused_ordering(510) 00:12:36.540 fused_ordering(511) 
00:12:36.540 fused_ordering(512) 00:12:36.540 fused_ordering(513) 00:12:36.540 fused_ordering(514) 00:12:36.540 fused_ordering(515) 00:12:36.540 fused_ordering(516) 00:12:36.540 fused_ordering(517) 00:12:36.540 fused_ordering(518) 00:12:36.540 fused_ordering(519) 00:12:36.540 fused_ordering(520) 00:12:36.540 fused_ordering(521) 00:12:36.540 fused_ordering(522) 00:12:36.540 fused_ordering(523) 00:12:36.540 fused_ordering(524) 00:12:36.540 fused_ordering(525) 00:12:36.540 fused_ordering(526) 00:12:36.540 fused_ordering(527) 00:12:36.540 fused_ordering(528) 00:12:36.540 fused_ordering(529) 00:12:36.540 fused_ordering(530) 00:12:36.540 fused_ordering(531) 00:12:36.540 fused_ordering(532) 00:12:36.540 fused_ordering(533) 00:12:36.540 fused_ordering(534) 00:12:36.540 fused_ordering(535) 00:12:36.540 fused_ordering(536) 00:12:36.540 fused_ordering(537) 00:12:36.540 fused_ordering(538) 00:12:36.540 fused_ordering(539) 00:12:36.540 fused_ordering(540) 00:12:36.540 fused_ordering(541) 00:12:36.540 fused_ordering(542) 00:12:36.540 fused_ordering(543) 00:12:36.540 fused_ordering(544) 00:12:36.540 fused_ordering(545) 00:12:36.540 fused_ordering(546) 00:12:36.540 fused_ordering(547) 00:12:36.540 fused_ordering(548) 00:12:36.540 fused_ordering(549) 00:12:36.540 fused_ordering(550) 00:12:36.540 fused_ordering(551) 00:12:36.540 fused_ordering(552) 00:12:36.540 fused_ordering(553) 00:12:36.540 fused_ordering(554) 00:12:36.540 fused_ordering(555) 00:12:36.540 fused_ordering(556) 00:12:36.540 fused_ordering(557) 00:12:36.540 fused_ordering(558) 00:12:36.540 fused_ordering(559) 00:12:36.540 fused_ordering(560) 00:12:36.540 fused_ordering(561) 00:12:36.540 fused_ordering(562) 00:12:36.540 fused_ordering(563) 00:12:36.540 fused_ordering(564) 00:12:36.540 fused_ordering(565) 00:12:36.540 fused_ordering(566) 00:12:36.540 fused_ordering(567) 00:12:36.540 fused_ordering(568) 00:12:36.540 fused_ordering(569) 00:12:36.540 fused_ordering(570) 00:12:36.540 fused_ordering(571) 00:12:36.540 fused_ordering(572) 00:12:36.540 fused_ordering(573) 00:12:36.540 fused_ordering(574) 00:12:36.540 fused_ordering(575) 00:12:36.540 fused_ordering(576) 00:12:36.540 fused_ordering(577) 00:12:36.540 fused_ordering(578) 00:12:36.540 fused_ordering(579) 00:12:36.540 fused_ordering(580) 00:12:36.540 fused_ordering(581) 00:12:36.540 fused_ordering(582) 00:12:36.540 fused_ordering(583) 00:12:36.540 fused_ordering(584) 00:12:36.540 fused_ordering(585) 00:12:36.540 fused_ordering(586) 00:12:36.540 fused_ordering(587) 00:12:36.540 fused_ordering(588) 00:12:36.540 fused_ordering(589) 00:12:36.540 fused_ordering(590) 00:12:36.540 fused_ordering(591) 00:12:36.540 fused_ordering(592) 00:12:36.540 fused_ordering(593) 00:12:36.540 fused_ordering(594) 00:12:36.540 fused_ordering(595) 00:12:36.540 fused_ordering(596) 00:12:36.540 fused_ordering(597) 00:12:36.540 fused_ordering(598) 00:12:36.540 fused_ordering(599) 00:12:36.540 fused_ordering(600) 00:12:36.540 fused_ordering(601) 00:12:36.540 fused_ordering(602) 00:12:36.540 fused_ordering(603) 00:12:36.540 fused_ordering(604) 00:12:36.540 fused_ordering(605) 00:12:36.540 fused_ordering(606) 00:12:36.540 fused_ordering(607) 00:12:36.540 fused_ordering(608) 00:12:36.540 fused_ordering(609) 00:12:36.540 fused_ordering(610) 00:12:36.540 fused_ordering(611) 00:12:36.540 fused_ordering(612) 00:12:36.540 fused_ordering(613) 00:12:36.540 fused_ordering(614) 00:12:36.540 fused_ordering(615) 00:12:36.798 fused_ordering(616) 00:12:36.798 fused_ordering(617) 00:12:36.798 fused_ordering(618) 00:12:36.798 
fused_ordering(619) 00:12:36.798 fused_ordering(620) 00:12:36.798 fused_ordering(621) 00:12:36.798 fused_ordering(622) 00:12:36.798 fused_ordering(623) 00:12:36.798 fused_ordering(624) 00:12:36.798 fused_ordering(625) 00:12:36.798 fused_ordering(626) 00:12:36.798 fused_ordering(627) 00:12:36.798 fused_ordering(628) 00:12:36.798 fused_ordering(629) 00:12:36.798 fused_ordering(630) 00:12:36.798 fused_ordering(631) 00:12:36.798 fused_ordering(632) 00:12:36.798 fused_ordering(633) 00:12:36.798 fused_ordering(634) 00:12:36.798 fused_ordering(635) 00:12:36.798 fused_ordering(636) 00:12:36.798 fused_ordering(637) 00:12:36.798 fused_ordering(638) 00:12:36.798 fused_ordering(639) 00:12:36.798 fused_ordering(640) 00:12:36.798 fused_ordering(641) 00:12:36.798 fused_ordering(642) 00:12:36.798 fused_ordering(643) 00:12:36.798 fused_ordering(644) 00:12:36.798 fused_ordering(645) 00:12:36.798 fused_ordering(646) 00:12:36.798 fused_ordering(647) 00:12:36.798 fused_ordering(648) 00:12:36.798 fused_ordering(649) 00:12:36.798 fused_ordering(650) 00:12:36.798 fused_ordering(651) 00:12:36.798 fused_ordering(652) 00:12:36.798 fused_ordering(653) 00:12:36.798 fused_ordering(654) 00:12:36.798 fused_ordering(655) 00:12:36.798 fused_ordering(656) 00:12:36.798 fused_ordering(657) 00:12:36.798 fused_ordering(658) 00:12:36.798 fused_ordering(659) 00:12:36.798 fused_ordering(660) 00:12:36.798 fused_ordering(661) 00:12:36.798 fused_ordering(662) 00:12:36.798 fused_ordering(663) 00:12:36.798 fused_ordering(664) 00:12:36.798 fused_ordering(665) 00:12:36.798 fused_ordering(666) 00:12:36.798 fused_ordering(667) 00:12:36.798 fused_ordering(668) 00:12:36.798 fused_ordering(669) 00:12:36.798 fused_ordering(670) 00:12:36.798 fused_ordering(671) 00:12:36.798 fused_ordering(672) 00:12:36.798 fused_ordering(673) 00:12:36.798 fused_ordering(674) 00:12:36.798 fused_ordering(675) 00:12:36.798 fused_ordering(676) 00:12:36.798 fused_ordering(677) 00:12:36.798 fused_ordering(678) 00:12:36.798 fused_ordering(679) 00:12:36.798 fused_ordering(680) 00:12:36.798 fused_ordering(681) 00:12:36.798 fused_ordering(682) 00:12:36.798 fused_ordering(683) 00:12:36.798 fused_ordering(684) 00:12:36.798 fused_ordering(685) 00:12:36.798 fused_ordering(686) 00:12:36.798 fused_ordering(687) 00:12:36.798 fused_ordering(688) 00:12:36.798 fused_ordering(689) 00:12:36.798 fused_ordering(690) 00:12:36.798 fused_ordering(691) 00:12:36.798 fused_ordering(692) 00:12:36.798 fused_ordering(693) 00:12:36.798 fused_ordering(694) 00:12:36.798 fused_ordering(695) 00:12:36.798 fused_ordering(696) 00:12:36.798 fused_ordering(697) 00:12:36.798 fused_ordering(698) 00:12:36.798 fused_ordering(699) 00:12:36.798 fused_ordering(700) 00:12:36.798 fused_ordering(701) 00:12:36.798 fused_ordering(702) 00:12:36.799 fused_ordering(703) 00:12:36.799 fused_ordering(704) 00:12:36.799 fused_ordering(705) 00:12:36.799 fused_ordering(706) 00:12:36.799 fused_ordering(707) 00:12:36.799 fused_ordering(708) 00:12:36.799 fused_ordering(709) 00:12:36.799 fused_ordering(710) 00:12:36.799 fused_ordering(711) 00:12:36.799 fused_ordering(712) 00:12:36.799 fused_ordering(713) 00:12:36.799 fused_ordering(714) 00:12:36.799 fused_ordering(715) 00:12:36.799 fused_ordering(716) 00:12:36.799 fused_ordering(717) 00:12:36.799 fused_ordering(718) 00:12:36.799 fused_ordering(719) 00:12:36.799 fused_ordering(720) 00:12:36.799 fused_ordering(721) 00:12:36.799 fused_ordering(722) 00:12:36.799 fused_ordering(723) 00:12:36.799 fused_ordering(724) 00:12:36.799 fused_ordering(725) 00:12:36.799 fused_ordering(726) 
00:12:36.799 fused_ordering(727) 00:12:36.799 fused_ordering(728) 00:12:36.799 fused_ordering(729) 00:12:36.799 fused_ordering(730) 00:12:36.799 fused_ordering(731) 00:12:36.799 fused_ordering(732) 00:12:36.799 fused_ordering(733) 00:12:36.799 fused_ordering(734) 00:12:36.799 fused_ordering(735) 00:12:36.799 fused_ordering(736) 00:12:36.799 fused_ordering(737) 00:12:36.799 fused_ordering(738) 00:12:36.799 fused_ordering(739) 00:12:36.799 fused_ordering(740) 00:12:36.799 fused_ordering(741) 00:12:36.799 fused_ordering(742) 00:12:36.799 fused_ordering(743) 00:12:36.799 fused_ordering(744) 00:12:36.799 fused_ordering(745) 00:12:36.799 fused_ordering(746) 00:12:36.799 fused_ordering(747) 00:12:36.799 fused_ordering(748) 00:12:36.799 fused_ordering(749) 00:12:36.799 fused_ordering(750) 00:12:36.799 fused_ordering(751) 00:12:36.799 fused_ordering(752) 00:12:36.799 fused_ordering(753) 00:12:36.799 fused_ordering(754) 00:12:36.799 fused_ordering(755) 00:12:36.799 fused_ordering(756) 00:12:36.799 fused_ordering(757) 00:12:36.799 fused_ordering(758) 00:12:36.799 fused_ordering(759) 00:12:36.799 fused_ordering(760) 00:12:36.799 fused_ordering(761) 00:12:36.799 fused_ordering(762) 00:12:36.799 fused_ordering(763) 00:12:36.799 fused_ordering(764) 00:12:36.799 fused_ordering(765) 00:12:36.799 fused_ordering(766) 00:12:36.799 fused_ordering(767) 00:12:36.799 fused_ordering(768) 00:12:36.799 fused_ordering(769) 00:12:36.799 fused_ordering(770) 00:12:36.799 fused_ordering(771) 00:12:36.799 fused_ordering(772) 00:12:36.799 fused_ordering(773) 00:12:36.799 fused_ordering(774) 00:12:36.799 fused_ordering(775) 00:12:36.799 fused_ordering(776) 00:12:36.799 fused_ordering(777) 00:12:36.799 fused_ordering(778) 00:12:36.799 fused_ordering(779) 00:12:36.799 fused_ordering(780) 00:12:36.799 fused_ordering(781) 00:12:36.799 fused_ordering(782) 00:12:36.799 fused_ordering(783) 00:12:36.799 fused_ordering(784) 00:12:36.799 fused_ordering(785) 00:12:36.799 fused_ordering(786) 00:12:36.799 fused_ordering(787) 00:12:36.799 fused_ordering(788) 00:12:36.799 fused_ordering(789) 00:12:36.799 fused_ordering(790) 00:12:36.799 fused_ordering(791) 00:12:36.799 fused_ordering(792) 00:12:36.799 fused_ordering(793) 00:12:36.799 fused_ordering(794) 00:12:36.799 fused_ordering(795) 00:12:36.799 fused_ordering(796) 00:12:36.799 fused_ordering(797) 00:12:36.799 fused_ordering(798) 00:12:36.799 fused_ordering(799) 00:12:36.799 fused_ordering(800) 00:12:36.799 fused_ordering(801) 00:12:36.799 fused_ordering(802) 00:12:36.799 fused_ordering(803) 00:12:36.799 fused_ordering(804) 00:12:36.799 fused_ordering(805) 00:12:36.799 fused_ordering(806) 00:12:36.799 fused_ordering(807) 00:12:36.799 fused_ordering(808) 00:12:36.799 fused_ordering(809) 00:12:36.799 fused_ordering(810) 00:12:36.799 fused_ordering(811) 00:12:36.799 fused_ordering(812) 00:12:36.799 fused_ordering(813) 00:12:36.799 fused_ordering(814) 00:12:36.799 fused_ordering(815) 00:12:36.799 fused_ordering(816) 00:12:36.799 fused_ordering(817) 00:12:36.799 fused_ordering(818) 00:12:36.799 fused_ordering(819) 00:12:36.799 fused_ordering(820) 00:12:37.732 fused_ordering(821) 00:12:37.732 fused_ordering(822) 00:12:37.732 fused_ordering(823) 00:12:37.732 fused_ordering(824) 00:12:37.732 fused_ordering(825) 00:12:37.732 fused_ordering(826) 00:12:37.732 fused_ordering(827) 00:12:37.732 fused_ordering(828) 00:12:37.732 fused_ordering(829) 00:12:37.732 fused_ordering(830) 00:12:37.732 fused_ordering(831) 00:12:37.732 fused_ordering(832) 00:12:37.732 fused_ordering(833) 00:12:37.732 
fused_ordering(834) 00:12:37.732 fused_ordering(835) 00:12:37.732 fused_ordering(836) 00:12:37.732 fused_ordering(837) 00:12:37.732 fused_ordering(838) 00:12:37.732 fused_ordering(839) 00:12:37.732 fused_ordering(840) 00:12:37.732 fused_ordering(841) 00:12:37.732 fused_ordering(842) 00:12:37.732 fused_ordering(843) 00:12:37.732 fused_ordering(844) 00:12:37.732 fused_ordering(845) 00:12:37.732 fused_ordering(846) 00:12:37.732 fused_ordering(847) 00:12:37.732 fused_ordering(848) 00:12:37.732 fused_ordering(849) 00:12:37.732 fused_ordering(850) 00:12:37.732 fused_ordering(851) 00:12:37.732 fused_ordering(852) 00:12:37.732 fused_ordering(853) 00:12:37.732 fused_ordering(854) 00:12:37.732 fused_ordering(855) 00:12:37.732 fused_ordering(856) 00:12:37.732 fused_ordering(857) 00:12:37.732 fused_ordering(858) 00:12:37.732 fused_ordering(859) 00:12:37.732 fused_ordering(860) 00:12:37.732 fused_ordering(861) 00:12:37.732 fused_ordering(862) 00:12:37.732 fused_ordering(863) 00:12:37.732 fused_ordering(864) 00:12:37.732 fused_ordering(865) 00:12:37.732 fused_ordering(866) 00:12:37.732 fused_ordering(867) 00:12:37.732 fused_ordering(868) 00:12:37.732 fused_ordering(869) 00:12:37.732 fused_ordering(870) 00:12:37.732 fused_ordering(871) 00:12:37.732 fused_ordering(872) 00:12:37.732 fused_ordering(873) 00:12:37.732 fused_ordering(874) 00:12:37.732 fused_ordering(875) 00:12:37.732 fused_ordering(876) 00:12:37.732 fused_ordering(877) 00:12:37.732 fused_ordering(878) 00:12:37.732 fused_ordering(879) 00:12:37.732 fused_ordering(880) 00:12:37.732 fused_ordering(881) 00:12:37.732 fused_ordering(882) 00:12:37.732 fused_ordering(883) 00:12:37.732 fused_ordering(884) 00:12:37.732 fused_ordering(885) 00:12:37.732 fused_ordering(886) 00:12:37.732 fused_ordering(887) 00:12:37.732 fused_ordering(888) 00:12:37.732 fused_ordering(889) 00:12:37.732 fused_ordering(890) 00:12:37.732 fused_ordering(891) 00:12:37.732 fused_ordering(892) 00:12:37.732 fused_ordering(893) 00:12:37.732 fused_ordering(894) 00:12:37.732 fused_ordering(895) 00:12:37.732 fused_ordering(896) 00:12:37.732 fused_ordering(897) 00:12:37.732 fused_ordering(898) 00:12:37.732 fused_ordering(899) 00:12:37.732 fused_ordering(900) 00:12:37.732 fused_ordering(901) 00:12:37.732 fused_ordering(902) 00:12:37.732 fused_ordering(903) 00:12:37.732 fused_ordering(904) 00:12:37.732 fused_ordering(905) 00:12:37.732 fused_ordering(906) 00:12:37.732 fused_ordering(907) 00:12:37.732 fused_ordering(908) 00:12:37.732 fused_ordering(909) 00:12:37.732 fused_ordering(910) 00:12:37.732 fused_ordering(911) 00:12:37.732 fused_ordering(912) 00:12:37.732 fused_ordering(913) 00:12:37.732 fused_ordering(914) 00:12:37.732 fused_ordering(915) 00:12:37.732 fused_ordering(916) 00:12:37.732 fused_ordering(917) 00:12:37.732 fused_ordering(918) 00:12:37.732 fused_ordering(919) 00:12:37.732 fused_ordering(920) 00:12:37.732 fused_ordering(921) 00:12:37.732 fused_ordering(922) 00:12:37.732 fused_ordering(923) 00:12:37.732 fused_ordering(924) 00:12:37.732 fused_ordering(925) 00:12:37.732 fused_ordering(926) 00:12:37.732 fused_ordering(927) 00:12:37.732 fused_ordering(928) 00:12:37.732 fused_ordering(929) 00:12:37.732 fused_ordering(930) 00:12:37.732 fused_ordering(931) 00:12:37.732 fused_ordering(932) 00:12:37.732 fused_ordering(933) 00:12:37.732 fused_ordering(934) 00:12:37.732 fused_ordering(935) 00:12:37.732 fused_ordering(936) 00:12:37.732 fused_ordering(937) 00:12:37.732 fused_ordering(938) 00:12:37.733 fused_ordering(939) 00:12:37.733 fused_ordering(940) 00:12:37.733 fused_ordering(941) 
00:12:37.733 fused_ordering(942) 00:12:37.733 fused_ordering(943) 00:12:37.733 fused_ordering(944) 00:12:37.733 fused_ordering(945) 00:12:37.733 fused_ordering(946) 00:12:37.733 fused_ordering(947) 00:12:37.733 fused_ordering(948) 00:12:37.733 fused_ordering(949) 00:12:37.733 fused_ordering(950) 00:12:37.733 fused_ordering(951) 00:12:37.733 fused_ordering(952) 00:12:37.733 fused_ordering(953) 00:12:37.733 fused_ordering(954) 00:12:37.733 fused_ordering(955) 00:12:37.733 fused_ordering(956) 00:12:37.733 fused_ordering(957) 00:12:37.733 fused_ordering(958) 00:12:37.733 fused_ordering(959) 00:12:37.733 fused_ordering(960) 00:12:37.733 fused_ordering(961) 00:12:37.733 fused_ordering(962) 00:12:37.733 fused_ordering(963) 00:12:37.733 fused_ordering(964) 00:12:37.733 fused_ordering(965) 00:12:37.733 fused_ordering(966) 00:12:37.733 fused_ordering(967) 00:12:37.733 fused_ordering(968) 00:12:37.733 fused_ordering(969) 00:12:37.733 fused_ordering(970) 00:12:37.733 fused_ordering(971) 00:12:37.733 fused_ordering(972) 00:12:37.733 fused_ordering(973) 00:12:37.733 fused_ordering(974) 00:12:37.733 fused_ordering(975) 00:12:37.733 fused_ordering(976) 00:12:37.733 fused_ordering(977) 00:12:37.733 fused_ordering(978) 00:12:37.733 fused_ordering(979) 00:12:37.733 fused_ordering(980) 00:12:37.733 fused_ordering(981) 00:12:37.733 fused_ordering(982) 00:12:37.733 fused_ordering(983) 00:12:37.733 fused_ordering(984) 00:12:37.733 fused_ordering(985) 00:12:37.733 fused_ordering(986) 00:12:37.733 fused_ordering(987) 00:12:37.733 fused_ordering(988) 00:12:37.733 fused_ordering(989) 00:12:37.733 fused_ordering(990) 00:12:37.733 fused_ordering(991) 00:12:37.733 fused_ordering(992) 00:12:37.733 fused_ordering(993) 00:12:37.733 fused_ordering(994) 00:12:37.733 fused_ordering(995) 00:12:37.733 fused_ordering(996) 00:12:37.733 fused_ordering(997) 00:12:37.733 fused_ordering(998) 00:12:37.733 fused_ordering(999) 00:12:37.733 fused_ordering(1000) 00:12:37.733 fused_ordering(1001) 00:12:37.733 fused_ordering(1002) 00:12:37.733 fused_ordering(1003) 00:12:37.733 fused_ordering(1004) 00:12:37.733 fused_ordering(1005) 00:12:37.733 fused_ordering(1006) 00:12:37.733 fused_ordering(1007) 00:12:37.733 fused_ordering(1008) 00:12:37.733 fused_ordering(1009) 00:12:37.733 fused_ordering(1010) 00:12:37.733 fused_ordering(1011) 00:12:37.733 fused_ordering(1012) 00:12:37.733 fused_ordering(1013) 00:12:37.733 fused_ordering(1014) 00:12:37.733 fused_ordering(1015) 00:12:37.733 fused_ordering(1016) 00:12:37.733 fused_ordering(1017) 00:12:37.733 fused_ordering(1018) 00:12:37.733 fused_ordering(1019) 00:12:37.733 fused_ordering(1020) 00:12:37.733 fused_ordering(1021) 00:12:37.733 fused_ordering(1022) 00:12:37.733 fused_ordering(1023) 00:12:37.733 02:09:51 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:37.733 02:09:51 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:37.733 02:09:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:37.733 02:09:51 -- nvmf/common.sh@116 -- # sync 00:12:37.733 02:09:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:37.733 02:09:52 -- nvmf/common.sh@119 -- # set +e 00:12:37.733 02:09:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:37.733 02:09:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:37.733 rmmod nvme_tcp 00:12:37.733 rmmod nvme_fabrics 00:12:37.733 rmmod nvme_keyring 00:12:37.733 02:09:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:37.733 02:09:52 -- nvmf/common.sh@123 -- # set -e 00:12:37.733 02:09:52 -- nvmf/common.sh@124 -- # return 0 
00:12:37.733 02:09:52 -- nvmf/common.sh@477 -- # '[' -n 68576 ']' 00:12:37.733 02:09:52 -- nvmf/common.sh@478 -- # killprocess 68576 00:12:37.733 02:09:52 -- common/autotest_common.sh@926 -- # '[' -z 68576 ']' 00:12:37.733 02:09:52 -- common/autotest_common.sh@930 -- # kill -0 68576 00:12:37.733 02:09:52 -- common/autotest_common.sh@931 -- # uname 00:12:37.733 02:09:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:37.733 02:09:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68576 00:12:37.733 02:09:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:37.733 02:09:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:37.733 02:09:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68576' 00:12:37.733 killing process with pid 68576 00:12:37.733 02:09:52 -- common/autotest_common.sh@945 -- # kill 68576 00:12:37.733 02:09:52 -- common/autotest_common.sh@950 -- # wait 68576 00:12:37.733 02:09:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:37.733 02:09:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:37.733 02:09:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:37.733 02:09:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.733 02:09:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:37.733 02:09:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.733 02:09:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.733 02:09:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.991 02:09:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:37.991 00:12:37.991 real 0m4.114s 00:12:37.991 user 0m5.015s 00:12:37.991 sys 0m1.342s 00:12:37.991 02:09:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.991 02:09:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.991 ************************************ 00:12:37.991 END TEST nvmf_fused_ordering 00:12:37.991 ************************************ 00:12:37.991 02:09:52 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:37.991 02:09:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:37.991 02:09:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:37.991 02:09:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.991 ************************************ 00:12:37.991 START TEST nvmf_delete_subsystem 00:12:37.991 ************************************ 00:12:37.991 02:09:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:37.991 * Looking for test storage... 
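The run_test call above hands control to test/nvmf/target/delete_subsystem.sh. As the log below shows, the test boils down to the following sequence, reproduced here as a condensed sketch assembled from the rpc_cmd invocations that appear later in this output (rpc.py stands for scripts/rpc.py; flags are copied from the log, and the script's trap/retry scaffolding is omitted):

    # target side: build a delay bdev on top of a null bdev and export it over TCP
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # initiator side: start I/O, then delete the subsystem while requests are in flight
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # the perf process is expected to see its outstanding I/O aborted (sct=0, sc=8) and exit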
00:12:37.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:37.991 02:09:52 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:37.991 02:09:52 -- nvmf/common.sh@7 -- # uname -s 00:12:37.991 02:09:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.991 02:09:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.991 02:09:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.991 02:09:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.991 02:09:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.991 02:09:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.991 02:09:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.991 02:09:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.991 02:09:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.991 02:09:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.992 02:09:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:12:37.992 02:09:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:12:37.992 02:09:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.992 02:09:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.992 02:09:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:37.992 02:09:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.992 02:09:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.992 02:09:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.992 02:09:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.992 02:09:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.992 02:09:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.992 02:09:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.992 02:09:52 -- 
paths/export.sh@5 -- # export PATH 00:12:37.992 02:09:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.992 02:09:52 -- nvmf/common.sh@46 -- # : 0 00:12:37.992 02:09:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:37.992 02:09:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:37.992 02:09:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:37.992 02:09:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.992 02:09:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.992 02:09:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:37.992 02:09:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:37.992 02:09:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:37.992 02:09:52 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:37.992 02:09:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:37.992 02:09:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.992 02:09:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:37.992 02:09:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:37.992 02:09:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:37.992 02:09:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.992 02:09:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.992 02:09:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.992 02:09:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:37.992 02:09:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:37.992 02:09:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:37.992 02:09:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:37.992 02:09:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:37.992 02:09:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:37.992 02:09:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.992 02:09:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.992 02:09:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:37.992 02:09:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:37.992 02:09:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:37.992 02:09:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:37.992 02:09:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:37.992 02:09:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.992 02:09:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:37.992 02:09:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:37.992 02:09:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:37.992 02:09:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:37.992 02:09:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:37.992 02:09:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:37.992 Cannot find device "nvmf_tgt_br" 00:12:37.992 
02:09:52 -- nvmf/common.sh@154 -- # true 00:12:37.992 02:09:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:37.992 Cannot find device "nvmf_tgt_br2" 00:12:37.992 02:09:52 -- nvmf/common.sh@155 -- # true 00:12:37.992 02:09:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:37.992 02:09:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:37.992 Cannot find device "nvmf_tgt_br" 00:12:37.992 02:09:52 -- nvmf/common.sh@157 -- # true 00:12:37.992 02:09:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:37.992 Cannot find device "nvmf_tgt_br2" 00:12:37.992 02:09:52 -- nvmf/common.sh@158 -- # true 00:12:37.992 02:09:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:37.992 02:09:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:38.250 02:09:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:38.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:38.250 02:09:52 -- nvmf/common.sh@161 -- # true 00:12:38.250 02:09:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:38.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:38.250 02:09:52 -- nvmf/common.sh@162 -- # true 00:12:38.250 02:09:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:38.250 02:09:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:38.250 02:09:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:38.250 02:09:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:38.250 02:09:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:38.250 02:09:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:38.250 02:09:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:38.250 02:09:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:38.250 02:09:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:38.250 02:09:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:38.250 02:09:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:38.250 02:09:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:38.250 02:09:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:38.250 02:09:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:38.250 02:09:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:38.250 02:09:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:38.250 02:09:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:38.250 02:09:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:38.250 02:09:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:38.250 02:09:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:38.250 02:09:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:38.250 02:09:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:38.250 02:09:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:38.250 02:09:52 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:12:38.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:12:38.250 00:12:38.250 --- 10.0.0.2 ping statistics --- 00:12:38.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.250 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:12:38.250 02:09:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:38.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:38.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:38.250 00:12:38.250 --- 10.0.0.3 ping statistics --- 00:12:38.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.250 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:38.250 02:09:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:38.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:38.250 00:12:38.250 --- 10.0.0.1 ping statistics --- 00:12:38.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.250 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:38.250 02:09:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.250 02:09:52 -- nvmf/common.sh@421 -- # return 0 00:12:38.250 02:09:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:38.250 02:09:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.250 02:09:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:38.250 02:09:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:38.250 02:09:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.250 02:09:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:38.250 02:09:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:38.250 02:09:52 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:38.250 02:09:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:38.250 02:09:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:38.250 02:09:52 -- common/autotest_common.sh@10 -- # set +x 00:12:38.250 02:09:52 -- nvmf/common.sh@469 -- # nvmfpid=68844 00:12:38.250 02:09:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:38.250 02:09:52 -- nvmf/common.sh@470 -- # waitforlisten 68844 00:12:38.250 02:09:52 -- common/autotest_common.sh@819 -- # '[' -z 68844 ']' 00:12:38.250 02:09:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.250 02:09:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:38.250 02:09:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.250 02:09:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:38.250 02:09:52 -- common/autotest_common.sh@10 -- # set +x 00:12:38.508 [2024-05-14 02:09:52.888894] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
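Before the target was started, nvmf_veth_init built the virtual network that the pings above verified. A minimal by-hand sketch of that fixture, using the interface names and addresses from this log (the teardown of stale interfaces at the top of the sequence is trimmed):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target namespace, as in the output above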
00:12:38.508 [2024-05-14 02:09:52.889011] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.508 [2024-05-14 02:09:53.038893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.766 [2024-05-14 02:09:53.107564] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:38.766 [2024-05-14 02:09:53.107733] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.766 [2024-05-14 02:09:53.107749] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.766 [2024-05-14 02:09:53.107760] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.766 [2024-05-14 02:09:53.107897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.766 [2024-05-14 02:09:53.107910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.344 02:09:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:39.344 02:09:53 -- common/autotest_common.sh@852 -- # return 0 00:12:39.344 02:09:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:39.344 02:09:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:39.344 02:09:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.602 02:09:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.602 02:09:53 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.602 02:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.602 02:09:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.602 [2024-05-14 02:09:53.945102] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.602 02:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.602 02:09:53 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:39.602 02:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.602 02:09:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.602 02:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.602 02:09:53 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.602 02:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.602 02:09:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.602 [2024-05-14 02:09:53.961220] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.602 02:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.602 02:09:53 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:39.602 02:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.602 02:09:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.602 NULL1 00:12:39.602 02:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.602 02:09:53 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:39.602 02:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.602 02:09:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.602 
Delay0 00:12:39.602 02:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.602 02:09:53 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.602 02:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.602 02:09:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.602 02:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.602 02:09:53 -- target/delete_subsystem.sh@28 -- # perf_pid=68895 00:12:39.602 02:09:53 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:39.602 02:09:53 -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:39.602 [2024-05-14 02:09:54.155794] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:41.501 02:09:55 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.501 02:09:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.501 02:09:55 -- common/autotest_common.sh@10 -- # set +x 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 
Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 [2024-05-14 02:09:56.190373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1497c10 is same with the state(5) to be set 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 
Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 [2024-05-14 02:09:56.193168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478840 is same with the state(5) to be set 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 starting I/O failed: -6 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 [2024-05-14 02:09:56.194113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdb20000c00 is same with the state(5) to be set 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with 
error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.760 Write completed with error (sct=0, sc=8) 00:12:41.760 Read completed with error (sct=0, sc=8) 00:12:41.761 Write completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Write completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Write completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Write completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Write completed with error (sct=0, sc=8) 00:12:41.761 Write completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Write completed with error (sct=0, sc=8) 00:12:41.761 Write completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Read completed with error (sct=0, sc=8) 00:12:41.761 Write completed with error (sct=0, sc=8) 00:12:42.697 [2024-05-14 02:09:57.170508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496f80 is same with the state(5) to be set 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 
00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 [2024-05-14 02:09:57.191425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1496080 is same with the state(5) to be set 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 [2024-05-14 02:09:57.191880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478af0 is same with the state(5) to be set 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 [2024-05-14 02:09:57.194151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdb2000bf20 is same with the state(5) to be set 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 
00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Write completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 Read completed with error (sct=0, sc=8) 00:12:42.697 [2024-05-14 02:09:57.194440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdb2000c600 is same with the state(5) to be set 00:12:42.697 [2024-05-14 02:09:57.195165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1496f80 (9): Bad file descriptor 00:12:42.697 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:42.697 02:09:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.697 02:09:57 -- target/delete_subsystem.sh@34 -- # delay=0 00:12:42.697 02:09:57 -- target/delete_subsystem.sh@35 -- # kill -0 68895 00:12:42.697 02:09:57 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:42.697 Initializing NVMe Controllers 00:12:42.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.697 Controller IO queue size 128, less than required. 00:12:42.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:42.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:42.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:42.697 Initialization complete. Launching workers. 
00:12:42.697 ======================================================== 00:12:42.697 Latency(us) 00:12:42.697 Device Information : IOPS MiB/s Average min max 00:12:42.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.41 0.08 899119.82 762.94 1011992.78 00:12:42.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.42 0.08 925547.91 491.57 2002047.17 00:12:42.697 ======================================================== 00:12:42.697 Total : 334.84 0.16 912255.44 491.57 2002047.17 00:12:42.697 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@35 -- # kill -0 68895 00:12:43.264 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (68895) - No such process 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@45 -- # NOT wait 68895 00:12:43.264 02:09:57 -- common/autotest_common.sh@640 -- # local es=0 00:12:43.264 02:09:57 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 68895 00:12:43.264 02:09:57 -- common/autotest_common.sh@628 -- # local arg=wait 00:12:43.264 02:09:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.264 02:09:57 -- common/autotest_common.sh@632 -- # type -t wait 00:12:43.264 02:09:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.264 02:09:57 -- common/autotest_common.sh@643 -- # wait 68895 00:12:43.264 02:09:57 -- common/autotest_common.sh@643 -- # es=1 00:12:43.264 02:09:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:43.264 02:09:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:43.264 02:09:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:43.264 02:09:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.264 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:12:43.264 02:09:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.264 02:09:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.264 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:12:43.264 [2024-05-14 02:09:57.722598] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.264 02:09:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.264 02:09:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.264 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:12:43.264 02:09:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@54 -- # perf_pid=68941 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@56 -- # delay=0 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@57 -- # kill -0 68941 00:12:43.264 02:09:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:43.522 [2024-05-14 02:09:57.900168] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:43.780 02:09:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:43.781 02:09:58 -- target/delete_subsystem.sh@57 -- # kill -0 68941 00:12:43.781 02:09:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:44.346 02:09:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:44.346 02:09:58 -- target/delete_subsystem.sh@57 -- # kill -0 68941 00:12:44.346 02:09:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:44.912 02:09:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:44.912 02:09:59 -- target/delete_subsystem.sh@57 -- # kill -0 68941 00:12:44.912 02:09:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:45.170 02:09:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:45.170 02:09:59 -- target/delete_subsystem.sh@57 -- # kill -0 68941 00:12:45.170 02:09:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:45.735 02:10:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:45.735 02:10:00 -- target/delete_subsystem.sh@57 -- # kill -0 68941 00:12:45.735 02:10:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:46.301 02:10:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:46.301 02:10:00 -- target/delete_subsystem.sh@57 -- # kill -0 68941 00:12:46.301 02:10:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:46.558 Initializing NVMe Controllers 00:12:46.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:46.558 Controller IO queue size 128, less than required. 00:12:46.558 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:46.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:46.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:46.558 Initialization complete. Launching workers. 
00:12:46.558 ======================================================== 00:12:46.558 Latency(us) 00:12:46.558 Device Information : IOPS MiB/s Average min max 00:12:46.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003347.63 1000137.33 1011037.42 00:12:46.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005634.65 1000191.32 1014080.69 00:12:46.558 ======================================================== 00:12:46.558 Total : 256.00 0.12 1004491.14 1000137.33 1014080.69 00:12:46.558 00:12:46.816 02:10:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:46.816 02:10:01 -- target/delete_subsystem.sh@57 -- # kill -0 68941 00:12:46.816 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (68941) - No such process 00:12:46.816 02:10:01 -- target/delete_subsystem.sh@67 -- # wait 68941 00:12:46.816 02:10:01 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:46.816 02:10:01 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:46.816 02:10:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:46.816 02:10:01 -- nvmf/common.sh@116 -- # sync 00:12:46.816 02:10:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:46.816 02:10:01 -- nvmf/common.sh@119 -- # set +e 00:12:46.816 02:10:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:46.816 02:10:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:46.816 rmmod nvme_tcp 00:12:46.816 rmmod nvme_fabrics 00:12:46.816 rmmod nvme_keyring 00:12:46.816 02:10:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:46.817 02:10:01 -- nvmf/common.sh@123 -- # set -e 00:12:46.817 02:10:01 -- nvmf/common.sh@124 -- # return 0 00:12:46.817 02:10:01 -- nvmf/common.sh@477 -- # '[' -n 68844 ']' 00:12:46.817 02:10:01 -- nvmf/common.sh@478 -- # killprocess 68844 00:12:46.817 02:10:01 -- common/autotest_common.sh@926 -- # '[' -z 68844 ']' 00:12:46.817 02:10:01 -- common/autotest_common.sh@930 -- # kill -0 68844 00:12:46.817 02:10:01 -- common/autotest_common.sh@931 -- # uname 00:12:46.817 02:10:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:46.817 02:10:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68844 00:12:46.817 killing process with pid 68844 00:12:46.817 02:10:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:46.817 02:10:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:46.817 02:10:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68844' 00:12:46.817 02:10:01 -- common/autotest_common.sh@945 -- # kill 68844 00:12:46.817 02:10:01 -- common/autotest_common.sh@950 -- # wait 68844 00:12:47.074 02:10:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:47.074 02:10:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:47.074 02:10:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:47.074 02:10:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.074 02:10:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:47.074 02:10:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.074 02:10:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.074 02:10:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.074 02:10:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:47.074 00:12:47.074 real 0m9.246s 00:12:47.074 user 0m28.708s 00:12:47.074 sys 0m1.448s 00:12:47.074 ************************************ 
00:12:47.074 END TEST nvmf_delete_subsystem 00:12:47.074 ************************************ 00:12:47.074 02:10:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.074 02:10:01 -- common/autotest_common.sh@10 -- # set +x 00:12:47.332 02:10:01 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:12:47.332 02:10:01 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:12:47.332 02:10:01 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:47.332 02:10:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:47.332 02:10:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:47.332 02:10:01 -- common/autotest_common.sh@10 -- # set +x 00:12:47.332 ************************************ 00:12:47.332 START TEST nvmf_vfio_user 00:12:47.332 ************************************ 00:12:47.332 02:10:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:47.332 * Looking for test storage... 00:12:47.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:47.332 02:10:01 -- nvmf/common.sh@7 -- # uname -s 00:12:47.332 02:10:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.332 02:10:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.332 02:10:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.332 02:10:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.332 02:10:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.332 02:10:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.332 02:10:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.332 02:10:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.332 02:10:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.332 02:10:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.332 02:10:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:12:47.332 02:10:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:12:47.332 02:10:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.332 02:10:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.332 02:10:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:47.332 02:10:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:47.332 02:10:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.332 02:10:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.332 02:10:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.332 02:10:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.332 02:10:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.332 02:10:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.332 02:10:01 -- paths/export.sh@5 -- # export PATH 00:12:47.332 02:10:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.332 02:10:01 -- nvmf/common.sh@46 -- # : 0 00:12:47.332 02:10:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:47.332 02:10:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:47.332 02:10:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:47.332 02:10:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.332 02:10:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.332 02:10:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:47.332 02:10:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:47.332 02:10:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=69064 00:12:47.332 Process pid: 69064 00:12:47.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 69064' 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:47.332 02:10:01 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 69064 00:12:47.332 02:10:01 -- common/autotest_common.sh@819 -- # '[' -z 69064 ']' 00:12:47.332 02:10:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.332 02:10:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:47.332 02:10:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.333 02:10:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:47.333 02:10:01 -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 [2024-05-14 02:10:01.831744] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:47.333 [2024-05-14 02:10:01.831881] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.591 [2024-05-14 02:10:02.010038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.591 [2024-05-14 02:10:02.101730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:47.591 [2024-05-14 02:10:02.102406] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.591 [2024-05-14 02:10:02.102573] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.591 [2024-05-14 02:10:02.102747] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:47.591 [2024-05-14 02:10:02.103054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.591 [2024-05-14 02:10:02.103167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.591 [2024-05-14 02:10:02.103293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.591 [2024-05-14 02:10:02.103302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.525 02:10:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:48.525 02:10:02 -- common/autotest_common.sh@852 -- # return 0 00:12:48.525 02:10:02 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:49.458 02:10:03 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:49.715 02:10:04 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:49.715 02:10:04 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:49.715 02:10:04 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:49.715 02:10:04 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:49.715 02:10:04 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:49.973 Malloc1 00:12:49.973 02:10:04 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:50.231 02:10:04 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:50.488 02:10:04 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:50.746 02:10:05 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:50.746 02:10:05 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:50.746 02:10:05 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:51.003 Malloc2 00:12:51.003 02:10:05 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:51.260 02:10:05 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:51.518 02:10:05 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:51.775 02:10:06 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:51.775 02:10:06 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:51.775 02:10:06 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:51.775 02:10:06 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:51.775 02:10:06 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:51.775 02:10:06 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:51.775 [2024-05-14 02:10:06.234334] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
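Before spdk_nvme_identify attaches below, the trace above shows the vfio-user target being assembled for NUM_DEVICES=2 with 64 MB / 512-byte-block malloc bdevs. Condensed into a shell sketch using only the rpc.py calls and flags that appear in the trace (the loop is a readability assumption; the harness issues the calls once per device):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i            # MALLOC_BDEV_SIZE MALLOC_BLOCK_SIZE
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done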
00:12:51.775 [2024-05-14 02:10:06.234403] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69200 ] 00:12:52.035 [2024-05-14 02:10:06.379253] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:52.035 [2024-05-14 02:10:06.388129] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:52.035 [2024-05-14 02:10:06.388175] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0449ef2000 00:12:52.035 [2024-05-14 02:10:06.389134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.035 [2024-05-14 02:10:06.390122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.035 [2024-05-14 02:10:06.391121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.035 [2024-05-14 02:10:06.392123] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:52.035 [2024-05-14 02:10:06.393129] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:52.035 [2024-05-14 02:10:06.394137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.035 [2024-05-14 02:10:06.395133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:52.035 [2024-05-14 02:10:06.396138] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:52.035 [2024-05-14 02:10:06.397141] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:52.035 [2024-05-14 02:10:06.397179] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0449507000 00:12:52.035 [2024-05-14 02:10:06.398574] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:52.035 [2024-05-14 02:10:06.418467] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:52.035 [2024-05-14 02:10:06.418523] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:52.035 [2024-05-14 02:10:06.421243] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:52.035 [2024-05-14 02:10:06.421327] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:52.035 [2024-05-14 02:10:06.421451] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:52.035 [2024-05-14 
02:10:06.421490] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:52.035 [2024-05-14 02:10:06.421503] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:52.035 [2024-05-14 02:10:06.422227] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:52.035 [2024-05-14 02:10:06.422269] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:52.035 [2024-05-14 02:10:06.422291] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:52.035 [2024-05-14 02:10:06.423228] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:52.035 [2024-05-14 02:10:06.423269] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:52.035 [2024-05-14 02:10:06.423291] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:52.035 [2024-05-14 02:10:06.424224] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:52.035 [2024-05-14 02:10:06.424259] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:52.035 [2024-05-14 02:10:06.425241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:52.035 [2024-05-14 02:10:06.425274] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:52.035 [2024-05-14 02:10:06.425286] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:52.035 [2024-05-14 02:10:06.425303] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:52.035 [2024-05-14 02:10:06.425415] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:52.035 [2024-05-14 02:10:06.425433] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:52.035 [2024-05-14 02:10:06.425444] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:52.035 [2024-05-14 02:10:06.426249] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:52.035 [2024-05-14 02:10:06.427250] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:52.035 [2024-05-14 02:10:06.428253] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:12:52.035 [2024-05-14 02:10:06.429337] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:52.035 [2024-05-14 02:10:06.430268] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:52.035 [2024-05-14 02:10:06.430302] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:52.035 [2024-05-14 02:10:06.430315] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:52.035 [2024-05-14 02:10:06.430357] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:52.035 [2024-05-14 02:10:06.430377] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:52.035 [2024-05-14 02:10:06.430408] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:52.035 [2024-05-14 02:10:06.430420] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.035 [2024-05-14 02:10:06.430446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.035 [2024-05-14 02:10:06.430504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:52.035 [2024-05-14 02:10:06.430527] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:52.035 [2024-05-14 02:10:06.430542] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:52.035 [2024-05-14 02:10:06.430551] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:52.035 [2024-05-14 02:10:06.430560] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:52.035 [2024-05-14 02:10:06.430570] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:52.035 [2024-05-14 02:10:06.430580] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:52.035 [2024-05-14 02:10:06.430590] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:52.035 [2024-05-14 02:10:06.430612] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:52.035 [2024-05-14 02:10:06.430633] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.430649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.430667] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.036 [2024-05-14 02:10:06.430682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.036 [2024-05-14 02:10:06.430694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.036 [2024-05-14 02:10:06.430708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.036 [2024-05-14 02:10:06.430717] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.430738] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.430755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.430784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.430800] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:52.036 [2024-05-14 02:10:06.430811] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.430825] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.430841] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.430857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.430884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.430950] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.430972] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.430987] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:52.036 [2024-05-14 02:10:06.430996] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:52.036 [2024-05-14 02:10:06.431007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.431025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 
02:10:06.431053] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:52.036 [2024-05-14 02:10:06.431075] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.431093] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.431109] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:52.036 [2024-05-14 02:10:06.431118] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.036 [2024-05-14 02:10:06.431129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.431157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.431185] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.431203] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.431217] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:52.036 [2024-05-14 02:10:06.431225] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.036 [2024-05-14 02:10:06.431236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.431262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.431280] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.431293] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.431310] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.431322] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.431331] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.431339] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:52.036 [2024-05-14 02:10:06.431347] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:52.036 [2024-05-14 02:10:06.431357] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:52.036 [2024-05-14 02:10:06.431391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.431414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.431438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.431453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.431473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.431486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.431506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.431529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.431552] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:52.036 [2024-05-14 02:10:06.431562] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:52.036 [2024-05-14 02:10:06.431569] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:52.036 [2024-05-14 02:10:06.431575] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:52.036 [2024-05-14 02:10:06.431586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:52.036 [2024-05-14 02:10:06.431601] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:52.036 [2024-05-14 02:10:06.431609] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:52.036 [2024-05-14 02:10:06.431619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.431633] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:52.036 [2024-05-14 02:10:06.431643] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:52.036 [2024-05-14 02:10:06.431654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.431669] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:52.036 [2024-05-14 02:10:06.431678] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:52.036 [2024-05-14 02:10:06.431689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:52.036 [2024-05-14 02:10:06.431702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.431731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.431751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:52.036 [2024-05-14 02:10:06.431781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:52.036 ===================================================== 00:12:52.036 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:52.036 ===================================================== 00:12:52.036 Controller Capabilities/Features 00:12:52.036 ================================ 00:12:52.036 Vendor ID: 4e58 00:12:52.036 Subsystem Vendor ID: 4e58 00:12:52.036 Serial Number: SPDK1 00:12:52.036 Model Number: SPDK bdev Controller 00:12:52.036 Firmware Version: 24.01.1 00:12:52.036 Recommended Arb Burst: 6 00:12:52.036 IEEE OUI Identifier: 8d 6b 50 00:12:52.036 Multi-path I/O 00:12:52.036 May have multiple subsystem ports: Yes 00:12:52.036 May have multiple controllers: Yes 00:12:52.036 Associated with SR-IOV VF: No 00:12:52.036 Max Data Transfer Size: 131072 00:12:52.036 Max Number of Namespaces: 32 00:12:52.036 Max Number of I/O Queues: 127 00:12:52.036 NVMe Specification Version (VS): 1.3 00:12:52.036 NVMe Specification Version (Identify): 1.3 00:12:52.036 Maximum Queue Entries: 256 00:12:52.036 Contiguous Queues Required: Yes 00:12:52.036 Arbitration Mechanisms Supported 00:12:52.036 Weighted Round Robin: Not Supported 00:12:52.036 Vendor Specific: Not Supported 00:12:52.036 Reset Timeout: 15000 ms 00:12:52.036 Doorbell Stride: 4 bytes 00:12:52.037 NVM Subsystem Reset: Not Supported 00:12:52.037 Command Sets Supported 00:12:52.037 NVM Command Set: Supported 00:12:52.037 Boot Partition: Not Supported 00:12:52.037 Memory Page Size Minimum: 4096 bytes 00:12:52.037 Memory Page Size Maximum: 4096 bytes 00:12:52.037 Persistent Memory Region: Not Supported 00:12:52.037 Optional Asynchronous Events Supported 00:12:52.037 Namespace Attribute Notices: Supported 00:12:52.037 Firmware Activation Notices: Not Supported 00:12:52.037 ANA Change Notices: Not Supported 00:12:52.037 PLE Aggregate Log Change Notices: Not Supported 00:12:52.037 LBA Status Info Alert Notices: Not Supported 00:12:52.037 EGE Aggregate Log Change Notices: Not Supported 00:12:52.037 Normal NVM Subsystem Shutdown event: Not Supported 00:12:52.037 Zone Descriptor Change Notices: Not Supported 00:12:52.037 Discovery Log Change Notices: Not Supported 00:12:52.037 Controller Attributes 00:12:52.037 128-bit Host Identifier: Supported 00:12:52.037 Non-Operational Permissive Mode: Not Supported 00:12:52.037 NVM Sets: Not Supported 00:12:52.037 Read Recovery Levels: Not Supported 00:12:52.037 Endurance Groups: Not Supported 00:12:52.037 Predictable Latency Mode: Not Supported 00:12:52.037 Traffic Based Keep ALive: Not Supported 00:12:52.037 Namespace Granularity: Not Supported 00:12:52.037 SQ Associations: Not Supported 00:12:52.037 UUID List: Not Supported 00:12:52.037 Multi-Domain Subsystem: Not Supported 00:12:52.037 Fixed Capacity Management: Not Supported 00:12:52.037 
Variable Capacity Management: Not Supported 00:12:52.037 Delete Endurance Group: Not Supported 00:12:52.037 Delete NVM Set: Not Supported 00:12:52.037 Extended LBA Formats Supported: Not Supported 00:12:52.037 Flexible Data Placement Supported: Not Supported 00:12:52.037 00:12:52.037 Controller Memory Buffer Support 00:12:52.037 ================================ 00:12:52.037 Supported: No 00:12:52.037 00:12:52.037 Persistent Memory Region Support 00:12:52.037 ================================ 00:12:52.037 Supported: No 00:12:52.037 00:12:52.037 Admin Command Set Attributes 00:12:52.037 ============================ 00:12:52.037 Security Send/Receive: Not Supported 00:12:52.037 Format NVM: Not Supported 00:12:52.037 Firmware Activate/Download: Not Supported 00:12:52.037 Namespace Management: Not Supported 00:12:52.037 Device Self-Test: Not Supported 00:12:52.037 Directives: Not Supported 00:12:52.037 NVMe-MI: Not Supported 00:12:52.037 Virtualization Management: Not Supported 00:12:52.037 Doorbell Buffer Config: Not Supported 00:12:52.037 Get LBA Status Capability: Not Supported 00:12:52.037 Command & Feature Lockdown Capability: Not Supported 00:12:52.037 Abort Command Limit: 4 00:12:52.037 Async Event Request Limit: 4 00:12:52.037 Number of Firmware Slots: N/A 00:12:52.037 Firmware Slot 1 Read-Only: N/A 00:12:52.037 Firmware Activation Without Reset: N/A 00:12:52.037 Multiple Update Detection Support: N/A 00:12:52.037 Firmware Update Granularity: No Information Provided 00:12:52.037 Per-Namespace SMART Log: No 00:12:52.037 Asymmetric Namespace Access Log Page: Not Supported 00:12:52.037 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:52.037 Command Effects Log Page: Supported 00:12:52.037 Get Log Page Extended Data: Supported 00:12:52.037 Telemetry Log Pages: Not Supported 00:12:52.037 Persistent Event Log Pages: Not Supported 00:12:52.037 Supported Log Pages Log Page: May Support 00:12:52.037 Commands Supported & Effects Log Page: Not Supported 00:12:52.037 Feature Identifiers & Effects Log Page:May Support 00:12:52.037 NVMe-MI Commands & Effects Log Page: May Support 00:12:52.037 Data Area 4 for Telemetry Log: Not Supported 00:12:52.037 Error Log Page Entries Supported: 128 00:12:52.037 Keep Alive: Supported 00:12:52.037 Keep Alive Granularity: 10000 ms 00:12:52.037 00:12:52.037 NVM Command Set Attributes 00:12:52.037 ========================== 00:12:52.037 Submission Queue Entry Size 00:12:52.037 Max: 64 00:12:52.037 Min: 64 00:12:52.037 Completion Queue Entry Size 00:12:52.037 Max: 16 00:12:52.037 Min: 16 00:12:52.037 Number of Namespaces: 32 00:12:52.037 Compare Command: Supported 00:12:52.037 Write Uncorrectable Command: Not Supported 00:12:52.037 Dataset Management Command: Supported 00:12:52.037 Write Zeroes Command: Supported 00:12:52.037 Set Features Save Field: Not Supported 00:12:52.037 Reservations: Not Supported 00:12:52.037 Timestamp: Not Supported 00:12:52.037 Copy: Supported 00:12:52.037 Volatile Write Cache: Present 00:12:52.037 Atomic Write Unit (Normal): 1 00:12:52.037 Atomic Write Unit (PFail): 1 00:12:52.037 Atomic Compare & Write Unit: 1 00:12:52.037 Fused Compare & Write: Supported 00:12:52.037 Scatter-Gather List 00:12:52.037 SGL Command Set: Supported (Dword aligned) 00:12:52.037 SGL Keyed: Not Supported 00:12:52.037 SGL Bit Bucket Descriptor: Not Supported 00:12:52.037 SGL Metadata Pointer: Not Supported 00:12:52.037 Oversized SGL: Not Supported 00:12:52.037 SGL Metadata Address: Not Supported 00:12:52.037 SGL Offset: Not Supported 00:12:52.037 Transport SGL Data 
Block: Not Supported 00:12:52.037 Replay Protected Memory Block: Not Supported 00:12:52.037 00:12:52.037 Firmware Slot Information 00:12:52.037 ========================= 00:12:52.037 Active slot: 1 00:12:52.037 Slot 1 Firmware Revision: 24.01.1 00:12:52.037 00:12:52.037 00:12:52.037 Commands Supported and Effects 00:12:52.037 ============================== 00:12:52.037 Admin Commands 00:12:52.037 -------------- 00:12:52.037 Get Log Page (02h): Supported 00:12:52.037 Identify (06h): Supported 00:12:52.037 Abort (08h): Supported 00:12:52.037 Set Features (09h): Supported 00:12:52.037 Get Features (0Ah): Supported 00:12:52.037 Asynchronous Event Request (0Ch): Supported 00:12:52.037 Keep Alive (18h): Supported 00:12:52.037 I/O Commands 00:12:52.037 ------------ 00:12:52.037 Flush (00h): Supported LBA-Change 00:12:52.037 Write (01h): Supported LBA-Change 00:12:52.037 Read (02h): Supported 00:12:52.037 Compare (05h): Supported 00:12:52.037 Write Zeroes (08h): Supported LBA-Change 00:12:52.037 Dataset Management (09h): Supported LBA-Change 00:12:52.037 Copy (19h): Supported LBA-Change 00:12:52.037 Unknown (79h): Supported LBA-Change 00:12:52.037 Unknown (7Ah): Supported 00:12:52.037 00:12:52.037 Error Log 00:12:52.037 ========= 00:12:52.037 00:12:52.037 Arbitration 00:12:52.037 =========== 00:12:52.037 Arbitration Burst: 1 00:12:52.037 00:12:52.037 Power Management 00:12:52.037 ================ 00:12:52.037 Number of Power States: 1 00:12:52.037 Current Power State: Power State #0 00:12:52.037 Power State #0: 00:12:52.037 Max Power: 0.00 W 00:12:52.037 Non-Operational State: Operational 00:12:52.037 Entry Latency: Not Reported 00:12:52.037 Exit Latency: Not Reported 00:12:52.037 Relative Read Throughput: 0 00:12:52.037 Relative Read Latency: 0 00:12:52.037 Relative Write Throughput: 0 00:12:52.037 Relative Write Latency: 0 00:12:52.037 Idle Power: Not Reported 00:12:52.037 Active Power: Not Reported 00:12:52.037 Non-Operational Permissive Mode: Not Supported 00:12:52.037 00:12:52.037 Health Information 00:12:52.037 ================== 00:12:52.037 Critical Warnings: 00:12:52.037 Available Spare Space: OK 00:12:52.037 Temperature: OK 00:12:52.037 Device Reliability: OK 00:12:52.037 Read Only: No 00:12:52.037 Volatile Memory Backup: OK 00:12:52.037 Current Temperature: 0 Kelvin[2024-05-14 02:10:06.431988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:52.038 [2024-05-14 02:10:06.432010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:52.038 [2024-05-14 02:10:06.432083] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:52.038 [2024-05-14 02:10:06.432105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.038 [2024-05-14 02:10:06.432119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.038 [2024-05-14 02:10:06.432130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.038 [2024-05-14 02:10:06.432141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.038 [2024-05-14 02:10:06.434786] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:52.038 [2024-05-14 02:10:06.434824] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:52.038 [2024-05-14 02:10:06.435334] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:52.038 [2024-05-14 02:10:06.435358] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:52.038 [2024-05-14 02:10:06.436281] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:52.038 [2024-05-14 02:10:06.436322] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:52.038 [2024-05-14 02:10:06.436443] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:52.038 [2024-05-14 02:10:06.439791] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:52.038 (-273 Celsius) 00:12:52.038 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:52.038 Available Spare: 0% 00:12:52.038 Available Spare Threshold: 0% 00:12:52.038 Life Percentage Used: 0% 00:12:52.038 Data Units Read: 0 00:12:52.038 Data Units Written: 0 00:12:52.038 Host Read Commands: 0 00:12:52.038 Host Write Commands: 0 00:12:52.038 Controller Busy Time: 0 minutes 00:12:52.038 Power Cycles: 0 00:12:52.038 Power On Hours: 0 hours 00:12:52.038 Unsafe Shutdowns: 0 00:12:52.038 Unrecoverable Media Errors: 0 00:12:52.038 Lifetime Error Log Entries: 0 00:12:52.038 Warning Temperature Time: 0 minutes 00:12:52.038 Critical Temperature Time: 0 minutes 00:12:52.038 00:12:52.038 Number of Queues 00:12:52.038 ================ 00:12:52.038 Number of I/O Submission Queues: 127 00:12:52.038 Number of I/O Completion Queues: 127 00:12:52.038 00:12:52.038 Active Namespaces 00:12:52.038 ================= 00:12:52.038 Namespace ID:1 00:12:52.038 Error Recovery Timeout: Unlimited 00:12:52.038 Command Set Identifier: NVM (00h) 00:12:52.038 Deallocate: Supported 00:12:52.038 Deallocated/Unwritten Error: Not Supported 00:12:52.038 Deallocated Read Value: Unknown 00:12:52.038 Deallocate in Write Zeroes: Not Supported 00:12:52.038 Deallocated Guard Field: 0xFFFF 00:12:52.038 Flush: Supported 00:12:52.038 Reservation: Supported 00:12:52.038 Namespace Sharing Capabilities: Multiple Controllers 00:12:52.038 Size (in LBAs): 131072 (0GiB) 00:12:52.038 Capacity (in LBAs): 131072 (0GiB) 00:12:52.038 Utilization (in LBAs): 131072 (0GiB) 00:12:52.038 NGUID: B97EA14F7BC24C1B9C2B0069EA331B54 00:12:52.038 UUID: b97ea14f-7bc2-4c1b-9c2b-0069ea331b54 00:12:52.038 Thin Provisioning: Not Supported 00:12:52.038 Per-NS Atomic Units: Yes 00:12:52.038 Atomic Boundary Size (Normal): 0 00:12:52.038 Atomic Boundary Size (PFail): 0 00:12:52.038 Atomic Boundary Offset: 0 00:12:52.038 Maximum Single Source Range Length: 65535 00:12:52.038 Maximum Copy Length: 65535 00:12:52.038 Maximum Source Range Count: 1 00:12:52.038 NGUID/EUI64 Never Reused: No 00:12:52.038 Namespace Write Protected: No 00:12:52.038 Number of LBA Formats: 1 00:12:52.038 Current LBA Format: LBA Format #00 00:12:52.038 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:52.038 00:12:52.038 
02:10:06 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:57.300 Initializing NVMe Controllers 00:12:57.300 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:57.300 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:57.300 Initialization complete. Launching workers. 00:12:57.300 ======================================================== 00:12:57.300 Latency(us) 00:12:57.300 Device Information : IOPS MiB/s Average min max 00:12:57.300 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 31194.42 121.85 4102.65 1229.08 10014.52 00:12:57.300 ======================================================== 00:12:57.300 Total : 31194.42 121.85 4102.65 1229.08 10014.52 00:12:57.300 00:12:57.300 02:10:11 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:02.592 Initializing NVMe Controllers 00:13:02.592 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:02.592 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:02.592 Initialization complete. Launching workers. 00:13:02.592 ======================================================== 00:13:02.592 Latency(us) 00:13:02.592 Device Information : IOPS MiB/s Average min max 00:13:02.592 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15786.97 61.67 8109.02 4991.19 18551.08 00:13:02.592 ======================================================== 00:13:02.592 Total : 15786.97 61.67 8109.02 4991.19 18551.08 00:13:02.592 00:13:02.593 02:10:17 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:09.150 Initializing NVMe Controllers 00:13:09.150 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:09.150 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:09.150 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:09.150 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:09.150 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:09.150 Initialization complete. Launching workers. 
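Both perf passes above address only the first controller. The same transport-ID string form would point spdk_nvme_perf at the second vfio-user controller created earlier; a hypothetical variant not executed in this log, keeping the flags from the read pass:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2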
00:13:09.150 Starting thread on core 2 00:13:09.150 Starting thread on core 3 00:13:09.150 Starting thread on core 1 00:13:09.150 02:10:22 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:11.680 Initializing NVMe Controllers 00:13:11.680 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:11.680 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:11.680 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:11.680 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:11.680 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:11.680 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:11.680 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:11.680 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:11.680 Initialization complete. Launching workers. 00:13:11.680 Starting thread on core 1 with urgent priority queue 00:13:11.680 Starting thread on core 2 with urgent priority queue 00:13:11.680 Starting thread on core 3 with urgent priority queue 00:13:11.680 Starting thread on core 0 with urgent priority queue 00:13:11.680 SPDK bdev Controller (SPDK1 ) core 0: 8042.33 IO/s 12.43 secs/100000 ios 00:13:11.680 SPDK bdev Controller (SPDK1 ) core 1: 8517.00 IO/s 11.74 secs/100000 ios 00:13:11.680 SPDK bdev Controller (SPDK1 ) core 2: 8200.67 IO/s 12.19 secs/100000 ios 00:13:11.680 SPDK bdev Controller (SPDK1 ) core 3: 9014.33 IO/s 11.09 secs/100000 ios 00:13:11.680 ======================================================== 00:13:11.680 00:13:11.680 02:10:25 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:11.680 Initializing NVMe Controllers 00:13:11.680 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:11.680 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:11.680 Namespace ID: 1 size: 0GB 00:13:11.680 Initialization complete. 00:13:11.681 INFO: using host memory buffer for IO 00:13:11.681 Hello world! 00:13:11.681 02:10:26 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:13.057 Initializing NVMe Controllers 00:13:13.057 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:13.057 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:13.057 Initialization complete. Launching workers. 
00:13:13.057 submit (in ns) avg, min, max = 7732.5, 3649.5, 4036690.0 00:13:13.057 complete (in ns) avg, min, max = 25434.1, 2296.4, 7027448.2 00:13:13.057 00:13:13.057 Submit histogram 00:13:13.057 ================ 00:13:13.057 Range in us Cumulative Count 00:13:13.057 3.636 - 3.651: 0.0149% ( 2) 00:13:13.057 3.651 - 3.665: 0.3208% ( 41) 00:13:13.057 3.665 - 3.680: 2.2905% ( 264) 00:13:13.057 3.680 - 3.695: 7.5505% ( 705) 00:13:13.057 3.695 - 3.709: 15.1757% ( 1022) 00:13:13.057 3.709 - 3.724: 27.6281% ( 1669) 00:13:13.057 3.724 - 3.753: 47.6386% ( 2682) 00:13:13.057 3.753 - 3.782: 64.2990% ( 2233) 00:13:13.057 3.782 - 3.811: 73.4462% ( 1226) 00:13:13.057 3.811 - 3.840: 79.1092% ( 759) 00:13:13.057 3.840 - 3.869: 83.1157% ( 537) 00:13:13.057 3.869 - 3.898: 86.6075% ( 468) 00:13:13.057 3.898 - 3.927: 89.3382% ( 366) 00:13:13.057 3.927 - 3.956: 91.5094% ( 291) 00:13:13.057 3.956 - 3.985: 93.0463% ( 206) 00:13:13.057 3.985 - 4.015: 94.0013% ( 128) 00:13:13.057 4.015 - 4.044: 94.9041% ( 121) 00:13:13.057 4.044 - 4.073: 95.5831% ( 91) 00:13:13.057 4.073 - 4.102: 96.1427% ( 75) 00:13:13.057 4.102 - 4.131: 96.5008% ( 48) 00:13:13.057 4.131 - 4.160: 96.7246% ( 30) 00:13:13.057 4.160 - 4.189: 97.0081% ( 38) 00:13:13.057 4.189 - 4.218: 97.1947% ( 25) 00:13:13.057 4.218 - 4.247: 97.3588% ( 22) 00:13:13.057 4.247 - 4.276: 97.4409% ( 11) 00:13:13.057 4.276 - 4.305: 97.5379% ( 13) 00:13:13.057 4.305 - 4.335: 97.6423% ( 14) 00:13:13.057 4.335 - 4.364: 97.7244% ( 11) 00:13:13.057 4.364 - 4.393: 97.7766% ( 7) 00:13:13.057 4.393 - 4.422: 97.8661% ( 12) 00:13:13.057 4.422 - 4.451: 97.8960% ( 4) 00:13:13.057 4.451 - 4.480: 97.9184% ( 3) 00:13:13.057 4.480 - 4.509: 98.0079% ( 12) 00:13:13.057 4.509 - 4.538: 98.0452% ( 5) 00:13:13.057 4.538 - 4.567: 98.0974% ( 7) 00:13:13.057 4.567 - 4.596: 98.1795% ( 11) 00:13:13.057 4.596 - 4.625: 98.2317% ( 7) 00:13:13.057 4.625 - 4.655: 98.3063% ( 10) 00:13:13.057 4.655 - 4.684: 98.3511% ( 6) 00:13:13.057 4.684 - 4.713: 98.3884% ( 5) 00:13:13.057 4.713 - 4.742: 98.4556% ( 9) 00:13:13.057 4.742 - 4.771: 98.5003% ( 6) 00:13:13.057 4.771 - 4.800: 98.5526% ( 7) 00:13:13.057 4.800 - 4.829: 98.6421% ( 12) 00:13:13.057 4.829 - 4.858: 98.7242% ( 11) 00:13:13.057 4.858 - 4.887: 98.7839% ( 8) 00:13:13.057 4.887 - 4.916: 98.8361% ( 7) 00:13:13.057 4.916 - 4.945: 98.8883% ( 7) 00:13:13.057 4.945 - 4.975: 98.9256% ( 5) 00:13:13.057 4.975 - 5.004: 98.9555% ( 4) 00:13:13.057 5.004 - 5.033: 98.9928% ( 5) 00:13:13.057 5.033 - 5.062: 99.0151% ( 3) 00:13:13.057 5.062 - 5.091: 99.0525% ( 5) 00:13:13.057 5.091 - 5.120: 99.0674% ( 2) 00:13:13.057 5.120 - 5.149: 99.0972% ( 4) 00:13:13.057 5.178 - 5.207: 99.1047% ( 1) 00:13:13.057 5.207 - 5.236: 99.1121% ( 1) 00:13:13.057 5.265 - 5.295: 99.1345% ( 3) 00:13:13.057 5.295 - 5.324: 99.1494% ( 2) 00:13:13.057 5.324 - 5.353: 99.1644% ( 2) 00:13:13.057 5.353 - 5.382: 99.1793% ( 2) 00:13:13.057 5.411 - 5.440: 99.2017% ( 3) 00:13:13.057 5.440 - 5.469: 99.2091% ( 1) 00:13:13.057 5.469 - 5.498: 99.2241% ( 2) 00:13:13.057 5.498 - 5.527: 99.2315% ( 1) 00:13:13.057 5.556 - 5.585: 99.2390% ( 1) 00:13:13.057 5.585 - 5.615: 99.2464% ( 1) 00:13:13.057 5.615 - 5.644: 99.2539% ( 1) 00:13:13.057 5.644 - 5.673: 99.2688% ( 2) 00:13:13.057 5.760 - 5.789: 99.2763% ( 1) 00:13:13.057 5.818 - 5.847: 99.2837% ( 1) 00:13:13.057 5.876 - 5.905: 99.2912% ( 1) 00:13:13.057 5.905 - 5.935: 99.2987% ( 1) 00:13:13.057 5.964 - 5.993: 99.3136% ( 2) 00:13:13.057 6.080 - 6.109: 99.3210% ( 1) 00:13:13.057 6.109 - 6.138: 99.3285% ( 1) 00:13:13.057 6.255 - 6.284: 99.3360% ( 1) 00:13:13.057 
6.284 - 6.313: 99.3434% ( 1) 00:13:13.057 6.342 - 6.371: 99.3509% ( 1) 00:13:13.057 6.662 - 6.691: 99.3584% ( 1) 00:13:13.057 6.895 - 6.924: 99.3658% ( 1) 00:13:13.057 6.924 - 6.953: 99.3733% ( 1) 00:13:13.057 7.069 - 7.098: 99.3807% ( 1) 00:13:13.057 7.127 - 7.156: 99.3882% ( 1) 00:13:13.057 7.273 - 7.302: 99.3957% ( 1) 00:13:13.057 7.389 - 7.418: 99.4031% ( 1) 00:13:13.057 7.447 - 7.505: 99.4106% ( 1) 00:13:13.057 7.680 - 7.738: 99.4180% ( 1) 00:13:13.057 7.971 - 8.029: 99.4255% ( 1) 00:13:13.057 8.145 - 8.204: 99.4404% ( 2) 00:13:13.057 8.320 - 8.378: 99.4479% ( 1) 00:13:13.057 8.436 - 8.495: 99.4553% ( 1) 00:13:13.057 8.495 - 8.553: 99.4628% ( 1) 00:13:13.057 8.553 - 8.611: 99.4703% ( 1) 00:13:13.057 8.611 - 8.669: 99.4777% ( 1) 00:13:13.057 8.727 - 8.785: 99.4852% ( 1) 00:13:13.057 8.844 - 8.902: 99.4927% ( 1) 00:13:13.057 8.902 - 8.960: 99.5076% ( 2) 00:13:13.057 8.960 - 9.018: 99.5150% ( 1) 00:13:13.057 9.018 - 9.076: 99.5225% ( 1) 00:13:13.057 9.135 - 9.193: 99.5374% ( 2) 00:13:13.057 9.193 - 9.251: 99.5523% ( 2) 00:13:13.057 9.367 - 9.425: 99.5673% ( 2) 00:13:13.057 9.542 - 9.600: 99.5747% ( 1) 00:13:13.057 9.600 - 9.658: 99.5822% ( 1) 00:13:13.057 9.658 - 9.716: 99.5896% ( 1) 00:13:13.057 9.775 - 9.833: 99.5971% ( 1) 00:13:13.057 9.833 - 9.891: 99.6120% ( 2) 00:13:13.057 9.891 - 9.949: 99.6195% ( 1) 00:13:13.057 9.949 - 10.007: 99.6344% ( 2) 00:13:13.057 10.007 - 10.065: 99.6419% ( 1) 00:13:13.057 10.065 - 10.124: 99.6493% ( 1) 00:13:13.057 10.182 - 10.240: 99.6643% ( 2) 00:13:13.057 10.298 - 10.356: 99.6941% ( 4) 00:13:13.057 10.356 - 10.415: 99.7165% ( 3) 00:13:13.057 10.415 - 10.473: 99.7314% ( 2) 00:13:13.057 10.473 - 10.531: 99.7389% ( 1) 00:13:13.057 10.589 - 10.647: 99.7463% ( 1) 00:13:13.057 10.647 - 10.705: 99.7538% ( 1) 00:13:13.057 10.705 - 10.764: 99.7612% ( 1) 00:13:13.057 10.996 - 11.055: 99.7836% ( 3) 00:13:13.057 11.287 - 11.345: 99.7911% ( 1) 00:13:13.057 11.345 - 11.404: 99.7986% ( 1) 00:13:13.057 11.695 - 11.753: 99.8060% ( 1) 00:13:13.057 11.811 - 11.869: 99.8135% ( 1) 00:13:13.057 12.451 - 12.509: 99.8209% ( 1) 00:13:13.057 12.975 - 13.033: 99.8433% ( 3) 00:13:13.057 13.149 - 13.207: 99.8508% ( 1) 00:13:13.057 13.615 - 13.673: 99.8582% ( 1) 00:13:13.057 14.429 - 14.487: 99.8657% ( 1) 00:13:13.057 14.836 - 14.895: 99.8732% ( 1) 00:13:13.057 15.011 - 15.127: 99.8806% ( 1) 00:13:13.057 15.360 - 15.476: 99.8881% ( 1) 00:13:13.057 15.476 - 15.593: 99.8955% ( 1) 00:13:13.057 15.942 - 16.058: 99.9030% ( 1) 00:13:13.057 3991.738 - 4021.527: 99.9627% ( 8) 00:13:13.057 4021.527 - 4051.316: 100.0000% ( 5) 00:13:13.057 00:13:13.057 Complete histogram 00:13:13.057 ================== 00:13:13.057 Range in us Cumulative Count 00:13:13.057 2.284 - 2.298: 0.0373% ( 5) 00:13:13.057 2.298 - 2.313: 8.1624% ( 1089) 00:13:13.057 2.313 - 2.327: 66.4478% ( 7812) 00:13:13.057 2.327 - 2.342: 86.2867% ( 2659) 00:13:13.057 2.342 - 2.356: 88.6742% ( 320) 00:13:13.057 2.356 - 2.371: 90.2410% ( 210) 00:13:13.058 2.371 - 2.385: 93.6656% ( 459) 00:13:13.058 2.385 - 2.400: 96.2322% ( 344) 00:13:13.058 2.400 - 2.415: 96.7843% ( 74) 00:13:13.058 2.415 - 2.429: 96.9783% ( 26) 00:13:13.058 2.429 - 2.444: 97.2021% ( 30) 00:13:13.058 2.444 - 2.458: 97.3140% ( 15) 00:13:13.058 2.458 - 2.473: 97.3886% ( 10) 00:13:13.058 2.473 - 2.487: 97.4409% ( 7) 00:13:13.058 2.487 - 2.502: 97.4707% ( 4) 00:13:13.058 2.502 - 2.516: 97.5006% ( 4) 00:13:13.058 2.516 - 2.531: 97.5453% ( 6) 00:13:13.058 2.531 - 2.545: 97.5901% ( 6) 00:13:13.058 2.545 - 2.560: 97.6199% ( 4) 00:13:13.058 2.560 - 2.575: 97.6274% ( 1) 
00:13:13.058 2.575 - 2.589: 97.6498% ( 3) 00:13:13.058 2.589 - 2.604: 97.6722% ( 3) 00:13:13.058 2.604 - 2.618: 97.7020% ( 4) 00:13:13.058 2.618 - 2.633: 97.7393% ( 5) 00:13:13.058 2.633 - 2.647: 97.7542% ( 2) 00:13:13.058 2.647 - 2.662: 97.7990% ( 6) 00:13:13.058 2.662 - 2.676: 97.8438% ( 6) 00:13:13.058 2.676 - 2.691: 97.8587% ( 2) 00:13:13.058 2.691 - 2.705: 97.8811% ( 3) 00:13:13.058 2.705 - 2.720: 97.9408% ( 8) 00:13:13.058 2.720 - 2.735: 97.9855% ( 6) 00:13:13.058 2.735 - 2.749: 98.0004% ( 2) 00:13:13.058 2.749 - 2.764: 98.0228% ( 3) 00:13:13.058 2.764 - 2.778: 98.0527% ( 4) 00:13:13.058 2.778 - 2.793: 98.0974% ( 6) 00:13:13.058 2.793 - 2.807: 98.1198% ( 3) 00:13:13.058 2.807 - 2.822: 98.1273% ( 1) 00:13:13.058 2.822 - 2.836: 98.1795% ( 7) 00:13:13.058 2.836 - 2.851: 98.1944% ( 2) 00:13:13.058 2.851 - 2.865: 98.2094% ( 2) 00:13:13.058 2.865 - 2.880: 98.2168% ( 1) 00:13:13.058 2.880 - 2.895: 98.2317% ( 2) 00:13:13.058 2.895 - 2.909: 98.2467% ( 2) 00:13:13.058 2.909 - 2.924: 98.2616% ( 2) 00:13:13.058 2.924 - 2.938: 98.2765% ( 2) 00:13:13.058 2.938 - 2.953: 98.2989% ( 3) 00:13:13.058 2.953 - 2.967: 98.3138% ( 2) 00:13:13.058 2.967 - 2.982: 98.3213% ( 1) 00:13:13.058 2.982 - 2.996: 98.3437% ( 3) 00:13:13.058 2.996 - 3.011: 98.3511% ( 1) 00:13:13.058 3.011 - 3.025: 98.3810% ( 4) 00:13:13.058 3.040 - 3.055: 98.4033% ( 3) 00:13:13.058 3.055 - 3.069: 98.4183% ( 2) 00:13:13.058 3.069 - 3.084: 98.4481% ( 4) 00:13:13.058 3.084 - 3.098: 98.4556% ( 1) 00:13:13.058 3.113 - 3.127: 98.4854% ( 4) 00:13:13.058 3.127 - 3.142: 98.5227% ( 5) 00:13:13.058 3.142 - 3.156: 98.5526% ( 4) 00:13:13.058 3.156 - 3.171: 98.5749% ( 3) 00:13:13.058 3.171 - 3.185: 98.5824% ( 1) 00:13:13.058 3.185 - 3.200: 98.5973% ( 2) 00:13:13.058 3.200 - 3.215: 98.6048% ( 1) 00:13:13.058 3.215 - 3.229: 98.6197% ( 2) 00:13:13.058 3.229 - 3.244: 98.6272% ( 1) 00:13:13.058 3.244 - 3.258: 98.6346% ( 1) 00:13:13.058 3.287 - 3.302: 98.6570% ( 3) 00:13:13.058 3.316 - 3.331: 98.6645% ( 1) 00:13:13.058 3.331 - 3.345: 98.6719% ( 1) 00:13:13.058 3.360 - 3.375: 98.6869% ( 2) 00:13:13.058 3.404 - 3.418: 98.6943% ( 1) 00:13:13.058 3.462 - 3.476: 98.7018% ( 1) 00:13:13.058 3.476 - 3.491: 98.7092% ( 1) 00:13:13.058 3.491 - 3.505: 98.7167% ( 1) 00:13:13.058 3.505 - 3.520: 98.7242% ( 1) 00:13:13.058 3.520 - 3.535: 98.7316% ( 1) 00:13:13.058 3.535 - 3.549: 98.7391% ( 1) 00:13:13.058 3.549 - 3.564: 98.7465% ( 1) 00:13:13.058 3.564 - 3.578: 98.7540% ( 1) 00:13:13.058 3.593 - 3.607: 98.7615% ( 1) 00:13:13.058 3.680 - 3.695: 98.7689% ( 1) 00:13:13.058 3.709 - 3.724: 98.7764% ( 1) 00:13:13.058 3.724 - 3.753: 98.7913% ( 2) 00:13:13.058 3.753 - 3.782: 98.8062% ( 2) 00:13:13.058 3.782 - 3.811: 98.8286% ( 3) 00:13:13.058 3.811 - 3.840: 98.8361% ( 1) 00:13:13.058 3.840 - 3.869: 98.8585% ( 3) 00:13:13.058 3.869 - 3.898: 98.8808% ( 3) 00:13:13.058 3.927 - 3.956: 98.8958% ( 2) 00:13:13.058 3.985 - 4.015: 98.9032% ( 1) 00:13:13.058 4.015 - 4.044: 98.9182% ( 2) 00:13:13.058 4.073 - 4.102: 98.9331% ( 2) 00:13:13.058 4.102 - 4.131: 98.9405% ( 1) 00:13:13.058 4.160 - 4.189: 98.9480% ( 1) 00:13:13.058 4.247 - 4.276: 98.9629% ( 2) 00:13:13.058 4.335 - 4.364: 98.9704% ( 1) 00:13:13.058 4.451 - 4.480: 98.9778% ( 1) 00:13:13.058 4.538 - 4.567: 98.9928% ( 2) 00:13:13.058 4.684 - 4.713: 99.0002% ( 1) 00:13:13.058 4.916 - 4.945: 99.0077% ( 1) 00:13:13.058 5.876 - 5.905: 99.0151% ( 1) 00:13:13.058 6.255 - 6.284: 99.0226% ( 1) 00:13:13.058 6.284 - 6.313: 99.0301% ( 1) 00:13:13.058 6.836 - 6.865: 99.0375% ( 1) 00:13:13.058 6.895 - 6.924: 99.0450% ( 1) 00:13:13.058 7.098 - 
7.127: 99.0525% ( 1) 00:13:13.058 7.273 - 7.302: 99.0599% ( 1) 00:13:13.058 7.505 - 7.564: 99.0674% ( 1) 00:13:13.058 7.564 - 7.622: 99.0748% ( 1) 00:13:13.058 7.680 - 7.738: 99.0823% ( 1) 00:13:13.058 7.913 - 7.971: 99.0898% ( 1) 00:13:13.058 7.971 - 8.029: 99.0972% ( 1) 00:13:13.058 8.029 - 8.087: 99.1047% ( 1) 00:13:13.058 8.145 - 8.204: 99.1121% ( 1) 00:13:13.058 8.611 - 8.669: 99.1271% ( 2) 00:13:13.058 8.727 - 8.785: 99.1345% ( 1) 00:13:13.058 8.844 - 8.902: 99.1420% ( 1) 00:13:13.058 8.902 - 8.960: 99.1494% ( 1) 00:13:13.058 8.960 - 9.018: 99.1569% ( 1) 00:13:13.058 9.018 - 9.076: 99.1644% ( 1) 00:13:13.058 9.135 - 9.193: 99.1718% ( 1) 00:13:13.058 9.193 - 9.251: 99.1793% ( 1) 00:13:13.058 9.251 - 9.309: 99.1867% ( 1) 00:13:13.058 9.367 - 9.425: 99.2017% ( 2) 00:13:13.058 9.425 - 9.484: 99.2091% ( 1) 00:13:13.058 9.716 - 9.775: 99.2166% ( 1) 00:13:13.058 9.949 - 10.007: 99.2241% ( 1) 00:13:13.058 10.124 - 10.182: 99.2315% ( 1) 00:13:13.058 10.473 - 10.531: 99.2390% ( 1) 00:13:13.058 10.705 - 10.764: 99.2539% ( 2) 00:13:13.058 10.822 - 10.880: 99.2614% ( 1) 00:13:13.058 10.996 - 11.055: 99.2688% ( 1) 00:13:13.058 11.229 - 11.287: 99.2763% ( 1) 00:13:13.058 11.287 - 11.345: 99.2837% ( 1) 00:13:13.058 11.636 - 11.695: 99.2912% ( 1) 00:13:13.058 11.753 - 11.811: 99.2987% ( 1) 00:13:13.058 11.811 - 11.869: 99.3061% ( 1) 00:13:13.058 12.393 - 12.451: 99.3136% ( 1) 00:13:13.058 12.509 - 12.567: 99.3210% ( 1) 00:13:13.058 13.033 - 13.091: 99.3285% ( 1) 00:13:13.058 13.091 - 13.149: 99.3434% ( 2) 00:13:13.058 13.149 - 13.207: 99.3509% ( 1) 00:13:13.058 13.731 - 13.789: 99.3584% ( 1) 00:13:13.058 13.847 - 13.905: 99.3658% ( 1) 00:13:13.058 13.964 - 14.022: 99.3733% ( 1) 00:13:13.058 14.138 - 14.196: 99.3882% ( 2) 00:13:13.058 14.313 - 14.371: 99.3957% ( 1) 00:13:13.058 14.604 - 14.662: 99.4031% ( 1) 00:13:13.058 14.662 - 14.720: 99.4106% ( 1) 00:13:13.058 16.640 - 16.756: 99.4180% ( 1) 00:13:13.058 18.618 - 18.735: 99.4255% ( 1) 00:13:13.058 1042.618 - 1050.065: 99.4330% ( 1) 00:13:13.058 3991.738 - 4021.527: 99.9254% ( 66) 00:13:13.058 4021.527 - 4051.316: 99.9925% ( 9) 00:13:13.058 7000.436 - 7030.225: 100.0000% ( 1) 00:13:13.058 00:13:13.058 02:10:27 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:13.058 02:10:27 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:13.058 02:10:27 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:13.058 02:10:27 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:13.058 02:10:27 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:13.317 [2024-05-14 02:10:27.722055] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:13.317 [ 00:13:13.317 { 00:13:13.317 "allow_any_host": true, 00:13:13.317 "hosts": [], 00:13:13.317 "listen_addresses": [], 00:13:13.317 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:13.317 "subtype": "Discovery" 00:13:13.317 }, 00:13:13.317 { 00:13:13.317 "allow_any_host": true, 00:13:13.317 "hosts": [], 00:13:13.317 "listen_addresses": [ 00:13:13.317 { 00:13:13.317 "adrfam": "IPv4", 00:13:13.317 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:13.317 "transport": "VFIOUSER", 00:13:13.317 "trsvcid": "0", 00:13:13.317 "trtype": "VFIOUSER" 00:13:13.317 } 00:13:13.317 ], 00:13:13.317 
"max_cntlid": 65519, 00:13:13.317 "max_namespaces": 32, 00:13:13.317 "min_cntlid": 1, 00:13:13.317 "model_number": "SPDK bdev Controller", 00:13:13.317 "namespaces": [ 00:13:13.317 { 00:13:13.317 "bdev_name": "Malloc1", 00:13:13.317 "name": "Malloc1", 00:13:13.317 "nguid": "B97EA14F7BC24C1B9C2B0069EA331B54", 00:13:13.317 "nsid": 1, 00:13:13.317 "uuid": "b97ea14f-7bc2-4c1b-9c2b-0069ea331b54" 00:13:13.317 } 00:13:13.317 ], 00:13:13.317 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:13.317 "serial_number": "SPDK1", 00:13:13.317 "subtype": "NVMe" 00:13:13.317 }, 00:13:13.317 { 00:13:13.317 "allow_any_host": true, 00:13:13.317 "hosts": [], 00:13:13.317 "listen_addresses": [ 00:13:13.317 { 00:13:13.317 "adrfam": "IPv4", 00:13:13.317 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:13.317 "transport": "VFIOUSER", 00:13:13.317 "trsvcid": "0", 00:13:13.317 "trtype": "VFIOUSER" 00:13:13.317 } 00:13:13.317 ], 00:13:13.317 "max_cntlid": 65519, 00:13:13.317 "max_namespaces": 32, 00:13:13.317 "min_cntlid": 1, 00:13:13.317 "model_number": "SPDK bdev Controller", 00:13:13.317 "namespaces": [ 00:13:13.317 { 00:13:13.317 "bdev_name": "Malloc2", 00:13:13.317 "name": "Malloc2", 00:13:13.317 "nguid": "185B581E054A40F382F8BC222F808C1A", 00:13:13.317 "nsid": 1, 00:13:13.317 "uuid": "185b581e-054a-40f3-82f8-bc222f808c1a" 00:13:13.317 } 00:13:13.317 ], 00:13:13.317 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:13.317 "serial_number": "SPDK2", 00:13:13.317 "subtype": "NVMe" 00:13:13.317 } 00:13:13.317 ] 00:13:13.317 02:10:27 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:13.317 02:10:27 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:13.317 02:10:27 -- target/nvmf_vfio_user.sh@34 -- # aerpid=69456 00:13:13.317 02:10:27 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:13.317 02:10:27 -- common/autotest_common.sh@1244 -- # local i=0 00:13:13.317 02:10:27 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:13.317 02:10:27 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:13:13.317 02:10:27 -- common/autotest_common.sh@1247 -- # i=1 00:13:13.317 02:10:27 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:13.317 02:10:27 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:13.317 02:10:27 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:13:13.317 02:10:27 -- common/autotest_common.sh@1247 -- # i=2 00:13:13.317 02:10:27 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:13.575 02:10:27 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:13.575 02:10:27 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:13.575 02:10:27 -- common/autotest_common.sh@1255 -- # return 0 00:13:13.575 02:10:27 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:13.575 02:10:27 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:13.835 Malloc3 00:13:13.835 02:10:28 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:14.093 02:10:28 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:14.093 Asynchronous Event Request test 00:13:14.093 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:14.093 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:14.093 Registering asynchronous event callbacks... 00:13:14.094 Starting namespace attribute notice tests for all controllers... 00:13:14.094 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:14.094 aer_cb - Changed Namespace 00:13:14.094 Cleaning up... 00:13:14.352 [ 00:13:14.352 { 00:13:14.352 "allow_any_host": true, 00:13:14.352 "hosts": [], 00:13:14.352 "listen_addresses": [], 00:13:14.352 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:14.352 "subtype": "Discovery" 00:13:14.352 }, 00:13:14.352 { 00:13:14.352 "allow_any_host": true, 00:13:14.352 "hosts": [], 00:13:14.352 "listen_addresses": [ 00:13:14.352 { 00:13:14.352 "adrfam": "IPv4", 00:13:14.352 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:14.352 "transport": "VFIOUSER", 00:13:14.352 "trsvcid": "0", 00:13:14.352 "trtype": "VFIOUSER" 00:13:14.352 } 00:13:14.352 ], 00:13:14.352 "max_cntlid": 65519, 00:13:14.352 "max_namespaces": 32, 00:13:14.352 "min_cntlid": 1, 00:13:14.352 "model_number": "SPDK bdev Controller", 00:13:14.352 "namespaces": [ 00:13:14.352 { 00:13:14.352 "bdev_name": "Malloc1", 00:13:14.352 "name": "Malloc1", 00:13:14.352 "nguid": "B97EA14F7BC24C1B9C2B0069EA331B54", 00:13:14.352 "nsid": 1, 00:13:14.352 "uuid": "b97ea14f-7bc2-4c1b-9c2b-0069ea331b54" 00:13:14.352 }, 00:13:14.352 { 00:13:14.352 "bdev_name": "Malloc3", 00:13:14.352 "name": "Malloc3", 00:13:14.352 "nguid": "6779340765A441E99D7979D1CC155CD2", 00:13:14.352 "nsid": 2, 00:13:14.352 "uuid": "67793407-65a4-41e9-9d79-79d1cc155cd2" 00:13:14.352 } 00:13:14.352 ], 00:13:14.352 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:14.352 "serial_number": "SPDK1", 00:13:14.352 "subtype": "NVMe" 00:13:14.352 }, 00:13:14.352 { 00:13:14.352 "allow_any_host": true, 00:13:14.352 "hosts": [], 00:13:14.352 "listen_addresses": [ 00:13:14.352 { 00:13:14.352 "adrfam": "IPv4", 00:13:14.352 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:14.352 "transport": "VFIOUSER", 00:13:14.352 "trsvcid": "0", 00:13:14.352 "trtype": "VFIOUSER" 00:13:14.352 } 00:13:14.352 ], 00:13:14.352 "max_cntlid": 65519, 00:13:14.352 "max_namespaces": 32, 00:13:14.352 "min_cntlid": 1, 00:13:14.352 "model_number": "SPDK bdev Controller", 00:13:14.352 "namespaces": [ 00:13:14.352 { 00:13:14.352 "bdev_name": "Malloc2", 00:13:14.352 "name": "Malloc2", 00:13:14.352 "nguid": "185B581E054A40F382F8BC222F808C1A", 00:13:14.352 "nsid": 1, 00:13:14.352 "uuid": "185b581e-054a-40f3-82f8-bc222f808c1a" 00:13:14.352 } 00:13:14.352 ], 00:13:14.352 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:14.352 "serial_number": "SPDK2", 00:13:14.352 "subtype": "NVMe" 00:13:14.352 } 00:13:14.352 ] 00:13:14.352 02:10:28 -- target/nvmf_vfio_user.sh@44 -- # wait 69456 
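The namespace-change AER exercised above reduces to a short RPC sequence against the running target. A minimal sketch, using the rpc.py path, bdev arguments, subsystem NQN and nsid exactly as they appear in the trace above (nothing below is invented beyond the comments):

  # Create a malloc bdev (size 64, block size 512, per the rpc's positional arguments)
  # and attach it as nsid 2 of cnode1; adding the namespace is what makes the target
  # raise a Namespace Attribute Changed AER.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  # Confirm the new namespace is listed under nqn.2019-07.io.spdk:cnode1:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems

The touch file is the synchronization point: the aer tool is started with -t /tmp/aer_touch_file and appears to create that file once its event callbacks are registered; the script's waitforfile loop polls for it (up to 200 iterations of 0.1 s, roughly 20 s), removes it, and only then adds the namespace so the resulting AER can be observed before the script waits on the aer pid.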
00:13:14.352 02:10:28 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:14.352 02:10:28 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:14.352 02:10:28 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:14.352 02:10:28 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:14.352 [2024-05-14 02:10:28.880827] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:14.352 [2024-05-14 02:10:28.880883] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69493 ] 00:13:14.614 [2024-05-14 02:10:29.019760] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:14.614 [2024-05-14 02:10:29.034063] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:14.614 [2024-05-14 02:10:29.034117] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff6ceaf2000 00:13:14.614 [2024-05-14 02:10:29.035069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.614 [2024-05-14 02:10:29.036075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.614 [2024-05-14 02:10:29.037075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.614 [2024-05-14 02:10:29.038069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.614 [2024-05-14 02:10:29.039070] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.614 [2024-05-14 02:10:29.040083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.614 [2024-05-14 02:10:29.041074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.614 [2024-05-14 02:10:29.042085] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.614 [2024-05-14 02:10:29.043092] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:14.614 [2024-05-14 02:10:29.043128] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff6ce13f000 00:13:14.614 [2024-05-14 02:10:29.044491] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:14.614 [2024-05-14 02:10:29.064080] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:14.614 [2024-05-14 02:10:29.064136] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:14.614 [2024-05-14 02:10:29.069288] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:14.614 [2024-05-14 02:10:29.069378] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:14.614 [2024-05-14 02:10:29.069491] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:14.614 [2024-05-14 02:10:29.069520] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:14.614 [2024-05-14 02:10:29.069527] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:14.614 [2024-05-14 02:10:29.070293] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:14.614 [2024-05-14 02:10:29.070342] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:14.614 [2024-05-14 02:10:29.070365] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:14.614 [2024-05-14 02:10:29.071277] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:14.614 [2024-05-14 02:10:29.071313] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:14.615 [2024-05-14 02:10:29.071328] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:14.615 [2024-05-14 02:10:29.072276] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:14.615 [2024-05-14 02:10:29.072308] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:14.615 [2024-05-14 02:10:29.073298] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:14.615 [2024-05-14 02:10:29.073328] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:14.615 [2024-05-14 02:10:29.073336] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:14.615 [2024-05-14 02:10:29.073346] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:14.615 [2024-05-14 02:10:29.073453] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:14.615 [2024-05-14 02:10:29.073459] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:13:14.615 [2024-05-14 02:10:29.073464] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:14.615 [2024-05-14 02:10:29.074288] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:14.615 [2024-05-14 02:10:29.075285] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:14.615 [2024-05-14 02:10:29.076290] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:14.615 [2024-05-14 02:10:29.077345] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:14.615 [2024-05-14 02:10:29.078292] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:14.615 [2024-05-14 02:10:29.078322] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:14.615 [2024-05-14 02:10:29.078331] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.078355] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:14.615 [2024-05-14 02:10:29.078367] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.078385] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.615 [2024-05-14 02:10:29.078391] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.615 [2024-05-14 02:10:29.078407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.615 [2024-05-14 02:10:29.085786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:14.615 [2024-05-14 02:10:29.085818] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:14.615 [2024-05-14 02:10:29.085833] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:14.615 [2024-05-14 02:10:29.085841] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:14.615 [2024-05-14 02:10:29.085849] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:14.615 [2024-05-14 02:10:29.085858] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:14.615 [2024-05-14 02:10:29.085866] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:14.615 [2024-05-14 02:10:29.085877] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.085900] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.085923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:14.615 [2024-05-14 02:10:29.093787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:14.615 [2024-05-14 02:10:29.093823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.615 [2024-05-14 02:10:29.093834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.615 [2024-05-14 02:10:29.093843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.615 [2024-05-14 02:10:29.093852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.615 [2024-05-14 02:10:29.093861] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.093883] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.093900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:14.615 [2024-05-14 02:10:29.101783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:14.615 [2024-05-14 02:10:29.101847] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:14.615 [2024-05-14 02:10:29.101858] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.101870] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.101884] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.101897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:14.615 [2024-05-14 02:10:29.109791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:14.615 [2024-05-14 02:10:29.109893] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.109912] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns 
(timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.109924] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:14.615 [2024-05-14 02:10:29.109930] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:14.615 [2024-05-14 02:10:29.109938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:14.615 [2024-05-14 02:10:29.117796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:14.615 [2024-05-14 02:10:29.117845] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:14.615 [2024-05-14 02:10:29.117868] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.117888] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.117905] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.615 [2024-05-14 02:10:29.117916] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.615 [2024-05-14 02:10:29.117929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.615 [2024-05-14 02:10:29.125784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:14.615 [2024-05-14 02:10:29.125845] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.125869] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.125887] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.615 [2024-05-14 02:10:29.125897] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.615 [2024-05-14 02:10:29.125907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.615 [2024-05-14 02:10:29.133783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:14.615 [2024-05-14 02:10:29.133815] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.133829] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.133847] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.133855] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] 
setting state to set doorbell buffer config (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.133861] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.133867] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:14.615 [2024-05-14 02:10:29.133872] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:14.615 [2024-05-14 02:10:29.133879] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:14.615 [2024-05-14 02:10:29.133906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:14.615 [2024-05-14 02:10:29.141794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:14.615 [2024-05-14 02:10:29.141847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:14.615 [2024-05-14 02:10:29.149792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:14.615 [2024-05-14 02:10:29.149838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:14.615 [2024-05-14 02:10:29.157787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:14.615 [2024-05-14 02:10:29.157824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:14.616 [2024-05-14 02:10:29.165782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:14.616 [2024-05-14 02:10:29.165839] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:14.616 [2024-05-14 02:10:29.165852] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:14.616 [2024-05-14 02:10:29.165860] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:14.616 [2024-05-14 02:10:29.165866] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:14.616 [2024-05-14 02:10:29.165880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:14.616 [2024-05-14 02:10:29.165895] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:14.616 [2024-05-14 02:10:29.165901] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:14.616 [2024-05-14 02:10:29.165908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:14.616 [2024-05-14 02:10:29.165916] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:14.616 [2024-05-14 02:10:29.165921] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.616 [2024-05-14 02:10:29.165927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.616 [2024-05-14 02:10:29.165936] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:14.616 [2024-05-14 02:10:29.165941] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:14.616 [2024-05-14 02:10:29.165947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:14.616 [2024-05-14 02:10:29.173787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:14.616 [2024-05-14 02:10:29.173833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:14.616 [2024-05-14 02:10:29.173846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:14.616 [2024-05-14 02:10:29.173856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:14.616 ===================================================== 00:13:14.616 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:14.616 ===================================================== 00:13:14.616 Controller Capabilities/Features 00:13:14.616 ================================ 00:13:14.616 Vendor ID: 4e58 00:13:14.616 Subsystem Vendor ID: 4e58 00:13:14.616 Serial Number: SPDK2 00:13:14.616 Model Number: SPDK bdev Controller 00:13:14.616 Firmware Version: 24.01.1 00:13:14.616 Recommended Arb Burst: 6 00:13:14.616 IEEE OUI Identifier: 8d 6b 50 00:13:14.616 Multi-path I/O 00:13:14.616 May have multiple subsystem ports: Yes 00:13:14.616 May have multiple controllers: Yes 00:13:14.616 Associated with SR-IOV VF: No 00:13:14.616 Max Data Transfer Size: 131072 00:13:14.616 Max Number of Namespaces: 32 00:13:14.616 Max Number of I/O Queues: 127 00:13:14.616 NVMe Specification Version (VS): 1.3 00:13:14.616 NVMe Specification Version (Identify): 1.3 00:13:14.616 Maximum Queue Entries: 256 00:13:14.616 Contiguous Queues Required: Yes 00:13:14.616 Arbitration Mechanisms Supported 00:13:14.616 Weighted Round Robin: Not Supported 00:13:14.616 Vendor Specific: Not Supported 00:13:14.616 Reset Timeout: 15000 ms 00:13:14.616 Doorbell Stride: 4 bytes 00:13:14.616 NVM Subsystem Reset: Not Supported 00:13:14.616 Command Sets Supported 00:13:14.616 NVM Command Set: Supported 00:13:14.616 Boot Partition: Not Supported 00:13:14.616 Memory Page Size Minimum: 4096 bytes 00:13:14.616 Memory Page Size Maximum: 4096 bytes 00:13:14.616 Persistent Memory Region: Not Supported 00:13:14.616 Optional Asynchronous Events Supported 00:13:14.616 Namespace Attribute Notices: Supported 00:13:14.616 Firmware Activation Notices: Not Supported 00:13:14.616 ANA Change Notices: Not Supported 00:13:14.616 PLE Aggregate Log Change Notices: Not Supported 00:13:14.616 LBA Status Info Alert Notices: Not Supported 00:13:14.616 EGE Aggregate Log Change Notices: Not Supported 00:13:14.616 Normal NVM Subsystem Shutdown event: Not Supported 00:13:14.616 Zone Descriptor Change Notices: Not Supported 
00:13:14.616 Discovery Log Change Notices: Not Supported 00:13:14.616 Controller Attributes 00:13:14.616 128-bit Host Identifier: Supported 00:13:14.616 Non-Operational Permissive Mode: Not Supported 00:13:14.616 NVM Sets: Not Supported 00:13:14.616 Read Recovery Levels: Not Supported 00:13:14.616 Endurance Groups: Not Supported 00:13:14.616 Predictable Latency Mode: Not Supported 00:13:14.616 Traffic Based Keep ALive: Not Supported 00:13:14.616 Namespace Granularity: Not Supported 00:13:14.616 SQ Associations: Not Supported 00:13:14.616 UUID List: Not Supported 00:13:14.616 Multi-Domain Subsystem: Not Supported 00:13:14.616 Fixed Capacity Management: Not Supported 00:13:14.616 Variable Capacity Management: Not Supported 00:13:14.616 Delete Endurance Group: Not Supported 00:13:14.616 Delete NVM Set: Not Supported 00:13:14.616 Extended LBA Formats Supported: Not Supported 00:13:14.616 Flexible Data Placement Supported: Not Supported 00:13:14.616 00:13:14.616 Controller Memory Buffer Support 00:13:14.616 ================================ 00:13:14.616 Supported: No 00:13:14.616 00:13:14.616 Persistent Memory Region Support 00:13:14.616 ================================ 00:13:14.616 Supported: No 00:13:14.616 00:13:14.616 Admin Command Set Attributes 00:13:14.616 ============================ 00:13:14.616 Security Send/Receive: Not Supported 00:13:14.616 Format NVM: Not Supported 00:13:14.616 Firmware Activate/Download: Not Supported 00:13:14.616 Namespace Management: Not Supported 00:13:14.616 Device Self-Test: Not Supported 00:13:14.616 Directives: Not Supported 00:13:14.616 NVMe-MI: Not Supported 00:13:14.616 Virtualization Management: Not Supported 00:13:14.616 Doorbell Buffer Config: Not Supported 00:13:14.616 Get LBA Status Capability: Not Supported 00:13:14.616 Command & Feature Lockdown Capability: Not Supported 00:13:14.616 Abort Command Limit: 4 00:13:14.616 Async Event Request Limit: 4 00:13:14.616 Number of Firmware Slots: N/A 00:13:14.616 Firmware Slot 1 Read-Only: N/A 00:13:14.616 Firmware Activation Without Reset: N/A 00:13:14.616 Multiple Update Detection Support: N/A 00:13:14.616 Firmware Update Granularity: No Information Provided 00:13:14.616 Per-Namespace SMART Log: No 00:13:14.616 Asymmetric Namespace Access Log Page: Not Supported 00:13:14.616 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:14.616 Command Effects Log Page: Supported 00:13:14.616 Get Log Page Extended Data: Supported 00:13:14.616 Telemetry Log Pages: Not Supported 00:13:14.616 Persistent Event Log Pages: Not Supported 00:13:14.616 Supported Log Pages Log Page: May Support 00:13:14.616 Commands Supported & Effects Log Page: Not Supported 00:13:14.616 Feature Identifiers & Effects Log Page:May Support 00:13:14.616 NVMe-MI Commands & Effects Log Page: May Support 00:13:14.616 Data Area 4 for Telemetry Log: Not Supported 00:13:14.616 Error Log Page Entries Supported: 128 00:13:14.616 Keep Alive: Supported 00:13:14.616 Keep Alive Granularity: 10000 ms 00:13:14.616 00:13:14.616 NVM Command Set Attributes 00:13:14.616 ========================== 00:13:14.616 Submission Queue Entry Size 00:13:14.616 Max: 64 00:13:14.616 Min: 64 00:13:14.616 Completion Queue Entry Size 00:13:14.616 Max: 16 00:13:14.616 Min: 16 00:13:14.616 Number of Namespaces: 32 00:13:14.616 Compare Command: Supported 00:13:14.616 Write Uncorrectable Command: Not Supported 00:13:14.616 Dataset Management Command: Supported 00:13:14.616 Write Zeroes Command: Supported 00:13:14.616 Set Features Save Field: Not Supported 00:13:14.616 Reservations: Not 
Supported 00:13:14.616 Timestamp: Not Supported 00:13:14.616 Copy: Supported 00:13:14.616 Volatile Write Cache: Present 00:13:14.616 Atomic Write Unit (Normal): 1 00:13:14.616 Atomic Write Unit (PFail): 1 00:13:14.616 Atomic Compare & Write Unit: 1 00:13:14.616 Fused Compare & Write: Supported 00:13:14.616 Scatter-Gather List 00:13:14.616 SGL Command Set: Supported (Dword aligned) 00:13:14.616 SGL Keyed: Not Supported 00:13:14.616 SGL Bit Bucket Descriptor: Not Supported 00:13:14.616 SGL Metadata Pointer: Not Supported 00:13:14.616 Oversized SGL: Not Supported 00:13:14.616 SGL Metadata Address: Not Supported 00:13:14.616 SGL Offset: Not Supported 00:13:14.616 Transport SGL Data Block: Not Supported 00:13:14.616 Replay Protected Memory Block: Not Supported 00:13:14.616 00:13:14.616 Firmware Slot Information 00:13:14.616 ========================= 00:13:14.616 Active slot: 1 00:13:14.616 Slot 1 Firmware Revision: 24.01.1 00:13:14.616 00:13:14.616 00:13:14.616 Commands Supported and Effects 00:13:14.616 ============================== 00:13:14.616 Admin Commands 00:13:14.616 -------------- 00:13:14.616 Get Log Page (02h): Supported 00:13:14.616 Identify (06h): Supported 00:13:14.616 Abort (08h): Supported 00:13:14.616 Set Features (09h): Supported 00:13:14.616 Get Features (0Ah): Supported 00:13:14.616 Asynchronous Event Request (0Ch): Supported 00:13:14.616 Keep Alive (18h): Supported 00:13:14.616 I/O Commands 00:13:14.616 ------------ 00:13:14.616 Flush (00h): Supported LBA-Change 00:13:14.616 Write (01h): Supported LBA-Change 00:13:14.617 Read (02h): Supported 00:13:14.617 Compare (05h): Supported 00:13:14.617 Write Zeroes (08h): Supported LBA-Change 00:13:14.617 Dataset Management (09h): Supported LBA-Change 00:13:14.617 Copy (19h): Supported LBA-Change 00:13:14.617 Unknown (79h): Supported LBA-Change 00:13:14.617 Unknown (7Ah): Supported 00:13:14.617 00:13:14.617 Error Log 00:13:14.617 ========= 00:13:14.617 00:13:14.617 Arbitration 00:13:14.617 =========== 00:13:14.617 Arbitration Burst: 1 00:13:14.617 00:13:14.617 Power Management 00:13:14.617 ================ 00:13:14.617 Number of Power States: 1 00:13:14.617 Current Power State: Power State #0 00:13:14.617 Power State #0: 00:13:14.617 Max Power: 0.00 W 00:13:14.617 Non-Operational State: Operational 00:13:14.617 Entry Latency: Not Reported 00:13:14.617 Exit Latency: Not Reported 00:13:14.617 Relative Read Throughput: 0 00:13:14.617 Relative Read Latency: 0 00:13:14.617 Relative Write Throughput: 0 00:13:14.617 Relative Write Latency: 0 00:13:14.617 Idle Power: Not Reported 00:13:14.617 Active Power: Not Reported 00:13:14.617 Non-Operational Permissive Mode: Not Supported 00:13:14.617 00:13:14.617 Health Information 00:13:14.617 ================== 00:13:14.617 Critical Warnings: 00:13:14.617 Available Spare Space: OK 00:13:14.617 Temperature: OK 00:13:14.617 Device Reliability: OK 00:13:14.617 Read Only: No 00:13:14.617 Volatile Memory Backup: OK 00:13:14.617 Current Temperature: 0 Kelvin[2024-05-14 02:10:29.173976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:14.617 [2024-05-14 02:10:29.181786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:14.617 [2024-05-14 02:10:29.181856] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:14.617 [2024-05-14 02:10:29.181873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.617 [2024-05-14 02:10:29.181885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.617 [2024-05-14 02:10:29.181897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.617 [2024-05-14 02:10:29.181909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.617 [2024-05-14 02:10:29.182015] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:14.617 [2024-05-14 02:10:29.182036] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:14.617 [2024-05-14 02:10:29.183057] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:14.617 [2024-05-14 02:10:29.183082] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:14.617 [2024-05-14 02:10:29.184009] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:14.617 [2024-05-14 02:10:29.184046] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:14.617 [2024-05-14 02:10:29.184152] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:14.617 [2024-05-14 02:10:29.185647] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:14.877 (-273 Celsius) 00:13:14.877 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:14.877 Available Spare: 0% 00:13:14.877 Available Spare Threshold: 0% 00:13:14.877 Life Percentage Used: 0% 00:13:14.877 Data Units Read: 0 00:13:14.877 Data Units Written: 0 00:13:14.877 Host Read Commands: 0 00:13:14.877 Host Write Commands: 0 00:13:14.877 Controller Busy Time: 0 minutes 00:13:14.877 Power Cycles: 0 00:13:14.877 Power On Hours: 0 hours 00:13:14.877 Unsafe Shutdowns: 0 00:13:14.877 Unrecoverable Media Errors: 0 00:13:14.877 Lifetime Error Log Entries: 0 00:13:14.877 Warning Temperature Time: 0 minutes 00:13:14.877 Critical Temperature Time: 0 minutes 00:13:14.877 00:13:14.877 Number of Queues 00:13:14.877 ================ 00:13:14.877 Number of I/O Submission Queues: 127 00:13:14.877 Number of I/O Completion Queues: 127 00:13:14.877 00:13:14.877 Active Namespaces 00:13:14.877 ================= 00:13:14.877 Namespace ID:1 00:13:14.877 Error Recovery Timeout: Unlimited 00:13:14.877 Command Set Identifier: NVM (00h) 00:13:14.877 Deallocate: Supported 00:13:14.877 Deallocated/Unwritten Error: Not Supported 00:13:14.877 Deallocated Read Value: Unknown 00:13:14.877 Deallocate in Write Zeroes: Not Supported 00:13:14.877 Deallocated Guard Field: 0xFFFF 00:13:14.877 Flush: Supported 00:13:14.877 Reservation: Supported 00:13:14.877 Namespace Sharing Capabilities: Multiple Controllers 00:13:14.877 Size (in LBAs): 131072 (0GiB) 00:13:14.877 Capacity (in LBAs): 131072 (0GiB) 00:13:14.877 Utilization (in LBAs): 131072 (0GiB) 00:13:14.877 NGUID: 
185B581E054A40F382F8BC222F808C1A 00:13:14.877 UUID: 185b581e-054a-40f3-82f8-bc222f808c1a 00:13:14.877 Thin Provisioning: Not Supported 00:13:14.877 Per-NS Atomic Units: Yes 00:13:14.877 Atomic Boundary Size (Normal): 0 00:13:14.877 Atomic Boundary Size (PFail): 0 00:13:14.877 Atomic Boundary Offset: 0 00:13:14.877 Maximum Single Source Range Length: 65535 00:13:14.877 Maximum Copy Length: 65535 00:13:14.877 Maximum Source Range Count: 1 00:13:14.877 NGUID/EUI64 Never Reused: No 00:13:14.877 Namespace Write Protected: No 00:13:14.877 Number of LBA Formats: 1 00:13:14.877 Current LBA Format: LBA Format #00 00:13:14.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:14.877 00:13:14.877 02:10:29 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:20.143 Initializing NVMe Controllers 00:13:20.143 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:20.143 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:20.144 Initialization complete. Launching workers. 00:13:20.144 ======================================================== 00:13:20.144 Latency(us) 00:13:20.144 Device Information : IOPS MiB/s Average min max 00:13:20.144 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32538.69 127.10 3932.57 1230.16 10596.12 00:13:20.144 ======================================================== 00:13:20.144 Total : 32538.69 127.10 3932.57 1230.16 10596.12 00:13:20.144 00:13:20.144 02:10:34 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:25.438 Initializing NVMe Controllers 00:13:25.438 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:25.438 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:25.438 Initialization complete. Launching workers. 00:13:25.438 ======================================================== 00:13:25.438 Latency(us) 00:13:25.438 Device Information : IOPS MiB/s Average min max 00:13:25.438 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32616.30 127.41 3923.71 1213.37 11696.90 00:13:25.438 ======================================================== 00:13:25.438 Total : 32616.30 127.41 3923.71 1213.37 11696.90 00:13:25.438 00:13:25.438 02:10:39 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:32.000 Initializing NVMe Controllers 00:13:32.000 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:32.000 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:32.000 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:32.000 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:32.000 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:32.000 Initialization complete. 
Launching workers. 00:13:32.000 Starting thread on core 2 00:13:32.000 Starting thread on core 3 00:13:32.000 Starting thread on core 1 00:13:32.000 02:10:45 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:34.532 Initializing NVMe Controllers 00:13:34.532 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:34.532 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:34.532 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:34.532 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:34.532 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:34.532 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:34.532 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:34.532 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:34.532 Initialization complete. Launching workers. 00:13:34.532 Starting thread on core 1 with urgent priority queue 00:13:34.532 Starting thread on core 2 with urgent priority queue 00:13:34.532 Starting thread on core 3 with urgent priority queue 00:13:34.532 Starting thread on core 0 with urgent priority queue 00:13:34.532 SPDK bdev Controller (SPDK2 ) core 0: 8356.67 IO/s 11.97 secs/100000 ios 00:13:34.532 SPDK bdev Controller (SPDK2 ) core 1: 7372.33 IO/s 13.56 secs/100000 ios 00:13:34.532 SPDK bdev Controller (SPDK2 ) core 2: 8272.67 IO/s 12.09 secs/100000 ios 00:13:34.532 SPDK bdev Controller (SPDK2 ) core 3: 8493.67 IO/s 11.77 secs/100000 ios 00:13:34.532 ======================================================== 00:13:34.532 00:13:34.532 02:10:48 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:34.532 Initializing NVMe Controllers 00:13:34.532 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:34.532 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:34.532 Namespace ID: 1 size: 0GB 00:13:34.532 Initialization complete. 00:13:34.532 INFO: using host memory buffer for IO 00:13:34.532 Hello world! 00:13:34.532 02:10:49 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:35.908 Initializing NVMe Controllers 00:13:35.908 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:35.908 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:35.908 Initialization complete. Launching workers. 
00:13:35.908 submit (in ns) avg, min, max = 6283.1, 3642.7, 4019845.5 00:13:35.908 complete (in ns) avg, min, max = 26590.3, 2307.3, 4034118.6 00:13:35.908 00:13:35.908 Submit histogram 00:13:35.908 ================ 00:13:35.908 Range in us Cumulative Count 00:13:35.908 3.636 - 3.651: 0.0074% ( 1) 00:13:35.908 3.651 - 3.665: 0.0372% ( 4) 00:13:35.908 3.665 - 3.680: 0.1415% ( 14) 00:13:35.908 3.680 - 3.695: 1.4669% ( 178) 00:13:35.908 3.695 - 3.709: 4.2293% ( 371) 00:13:35.908 3.709 - 3.724: 7.9896% ( 505) 00:13:35.908 3.724 - 3.753: 20.7148% ( 1709) 00:13:35.908 3.753 - 3.782: 40.1564% ( 2611) 00:13:35.908 3.782 - 3.811: 59.1288% ( 2548) 00:13:35.908 3.811 - 3.840: 74.1400% ( 2016) 00:13:35.908 3.840 - 3.869: 82.2859% ( 1094) 00:13:35.908 3.869 - 3.898: 85.7632% ( 467) 00:13:35.908 3.898 - 3.927: 88.3693% ( 350) 00:13:35.908 3.927 - 3.956: 90.1787% ( 243) 00:13:35.908 3.956 - 3.985: 91.6530% ( 198) 00:13:35.908 3.985 - 4.015: 92.8593% ( 162) 00:13:35.908 4.015 - 4.044: 93.6858% ( 111) 00:13:35.908 4.044 - 4.073: 94.3038% ( 83) 00:13:35.908 4.073 - 4.102: 94.8250% ( 70) 00:13:35.908 4.102 - 4.131: 95.3016% ( 64) 00:13:35.908 4.131 - 4.160: 95.6962% ( 53) 00:13:35.908 4.160 - 4.189: 95.8824% ( 25) 00:13:35.908 4.189 - 4.218: 96.0536% ( 23) 00:13:35.908 4.218 - 4.247: 96.1802% ( 17) 00:13:35.908 4.247 - 4.276: 96.2621% ( 11) 00:13:35.908 4.276 - 4.305: 96.3589% ( 13) 00:13:35.908 4.305 - 4.335: 96.4408% ( 11) 00:13:35.908 4.335 - 4.364: 96.5227% ( 11) 00:13:35.908 4.364 - 4.393: 96.5674% ( 6) 00:13:35.908 4.393 - 4.422: 96.6642% ( 13) 00:13:35.908 4.422 - 4.451: 96.7684% ( 14) 00:13:35.908 4.451 - 4.480: 96.8131% ( 6) 00:13:35.908 4.480 - 4.509: 96.8727% ( 8) 00:13:35.908 4.509 - 4.538: 96.9099% ( 5) 00:13:35.908 4.538 - 4.567: 96.9620% ( 7) 00:13:35.908 4.567 - 4.596: 97.0514% ( 12) 00:13:35.908 4.596 - 4.625: 97.1184% ( 9) 00:13:35.908 4.625 - 4.655: 97.1482% ( 4) 00:13:35.908 4.655 - 4.684: 97.2301% ( 11) 00:13:35.908 4.684 - 4.713: 97.2971% ( 9) 00:13:35.908 4.713 - 4.742: 97.3716% ( 10) 00:13:35.908 4.742 - 4.771: 97.4684% ( 13) 00:13:35.908 4.771 - 4.800: 97.5428% ( 10) 00:13:35.908 4.800 - 4.829: 97.6098% ( 9) 00:13:35.908 4.829 - 4.858: 97.6768% ( 9) 00:13:35.908 4.858 - 4.887: 97.7141% ( 5) 00:13:35.908 4.887 - 4.916: 97.7885% ( 10) 00:13:35.908 4.916 - 4.945: 97.8481% ( 8) 00:13:35.908 4.945 - 4.975: 97.9151% ( 9) 00:13:35.908 4.975 - 5.004: 98.0045% ( 12) 00:13:35.908 5.004 - 5.033: 98.0566% ( 7) 00:13:35.908 5.033 - 5.062: 98.1087% ( 7) 00:13:35.908 5.062 - 5.091: 98.1981% ( 12) 00:13:35.908 5.091 - 5.120: 98.2949% ( 13) 00:13:35.908 5.120 - 5.149: 98.3246% ( 4) 00:13:35.908 5.149 - 5.178: 98.3842% ( 8) 00:13:35.908 5.178 - 5.207: 98.4736% ( 12) 00:13:35.908 5.207 - 5.236: 98.5406% ( 9) 00:13:35.908 5.236 - 5.265: 98.5629% ( 3) 00:13:35.908 5.265 - 5.295: 98.6299% ( 9) 00:13:35.908 5.295 - 5.324: 98.6672% ( 5) 00:13:35.908 5.324 - 5.353: 98.7342% ( 9) 00:13:35.908 5.353 - 5.382: 98.7640% ( 4) 00:13:35.908 5.382 - 5.411: 98.8012% ( 5) 00:13:35.908 5.411 - 5.440: 98.8608% ( 8) 00:13:35.908 5.440 - 5.469: 98.8980% ( 5) 00:13:35.908 5.469 - 5.498: 98.9427% ( 6) 00:13:35.908 5.498 - 5.527: 98.9650% ( 3) 00:13:35.908 5.527 - 5.556: 99.0171% ( 7) 00:13:35.908 5.556 - 5.585: 99.0320% ( 2) 00:13:35.908 5.615 - 5.644: 99.0395% ( 1) 00:13:35.908 5.644 - 5.673: 99.0692% ( 4) 00:13:35.908 5.673 - 5.702: 99.0916% ( 3) 00:13:35.908 5.702 - 5.731: 99.1065% ( 2) 00:13:35.908 5.731 - 5.760: 99.1214% ( 2) 00:13:35.908 5.760 - 5.789: 99.1437% ( 3) 00:13:35.908 5.789 - 5.818: 99.1512% ( 1) 
00:13:35.908 5.818 - 5.847: 99.1660% ( 2) 00:13:35.908 5.876 - 5.905: 99.1735% ( 1) 00:13:35.908 5.905 - 5.935: 99.2033% ( 4) 00:13:35.908 5.935 - 5.964: 99.2107% ( 1) 00:13:35.908 5.964 - 5.993: 99.2256% ( 2) 00:13:35.908 5.993 - 6.022: 99.2331% ( 1) 00:13:35.908 6.022 - 6.051: 99.2480% ( 2) 00:13:35.908 6.080 - 6.109: 99.2554% ( 1) 00:13:35.908 6.109 - 6.138: 99.2628% ( 1) 00:13:35.908 6.255 - 6.284: 99.2703% ( 1) 00:13:35.908 6.313 - 6.342: 99.2777% ( 1) 00:13:35.908 6.458 - 6.487: 99.2852% ( 1) 00:13:35.908 6.516 - 6.545: 99.2926% ( 1) 00:13:35.908 6.633 - 6.662: 99.3001% ( 1) 00:13:35.908 6.691 - 6.720: 99.3075% ( 1) 00:13:35.908 6.749 - 6.778: 99.3150% ( 1) 00:13:35.908 7.040 - 7.069: 99.3224% ( 1) 00:13:35.908 7.447 - 7.505: 99.3448% ( 3) 00:13:35.908 7.505 - 7.564: 99.3522% ( 1) 00:13:35.908 7.680 - 7.738: 99.3596% ( 1) 00:13:35.908 7.738 - 7.796: 99.3671% ( 1) 00:13:35.908 7.913 - 7.971: 99.3745% ( 1) 00:13:35.908 8.262 - 8.320: 99.3820% ( 1) 00:13:35.908 8.436 - 8.495: 99.3969% ( 2) 00:13:35.908 8.902 - 8.960: 99.4118% ( 2) 00:13:35.908 9.018 - 9.076: 99.4192% ( 1) 00:13:35.908 9.135 - 9.193: 99.4267% ( 1) 00:13:35.908 9.309 - 9.367: 99.4341% ( 1) 00:13:35.908 9.367 - 9.425: 99.4490% ( 2) 00:13:35.908 9.425 - 9.484: 99.4564% ( 1) 00:13:35.909 9.484 - 9.542: 99.4639% ( 1) 00:13:35.909 9.542 - 9.600: 99.4788% ( 2) 00:13:35.909 9.658 - 9.716: 99.4862% ( 1) 00:13:35.909 9.716 - 9.775: 99.4937% ( 1) 00:13:35.909 9.775 - 9.833: 99.5011% ( 1) 00:13:35.909 9.891 - 9.949: 99.5235% ( 3) 00:13:35.909 9.949 - 10.007: 99.5309% ( 1) 00:13:35.909 10.007 - 10.065: 99.5532% ( 3) 00:13:35.909 10.065 - 10.124: 99.5607% ( 1) 00:13:35.909 10.182 - 10.240: 99.5681% ( 1) 00:13:35.909 10.240 - 10.298: 99.5830% ( 2) 00:13:35.909 10.298 - 10.356: 99.5905% ( 1) 00:13:35.909 10.356 - 10.415: 99.5979% ( 1) 00:13:35.909 10.415 - 10.473: 99.6054% ( 1) 00:13:35.909 10.473 - 10.531: 99.6128% ( 1) 00:13:35.909 10.589 - 10.647: 99.6351% ( 3) 00:13:35.909 10.705 - 10.764: 99.6426% ( 1) 00:13:35.909 10.822 - 10.880: 99.6575% ( 2) 00:13:35.909 10.938 - 10.996: 99.6798% ( 3) 00:13:35.909 10.996 - 11.055: 99.6873% ( 1) 00:13:35.909 11.113 - 11.171: 99.6947% ( 1) 00:13:35.909 11.229 - 11.287: 99.7022% ( 1) 00:13:35.909 11.462 - 11.520: 99.7096% ( 1) 00:13:35.909 11.520 - 11.578: 99.7171% ( 1) 00:13:35.909 11.811 - 11.869: 99.7394% ( 3) 00:13:35.909 11.869 - 11.927: 99.7468% ( 1) 00:13:35.909 11.927 - 11.985: 99.7543% ( 1) 00:13:35.909 11.985 - 12.044: 99.7617% ( 1) 00:13:35.909 12.160 - 12.218: 99.7692% ( 1) 00:13:35.909 12.218 - 12.276: 99.7766% ( 1) 00:13:35.909 12.335 - 12.393: 99.7915% ( 2) 00:13:35.909 12.451 - 12.509: 99.7990% ( 1) 00:13:35.909 12.684 - 12.742: 99.8064% ( 1) 00:13:35.909 12.916 - 12.975: 99.8138% ( 1) 00:13:35.909 13.265 - 13.324: 99.8213% ( 1) 00:13:35.909 13.382 - 13.440: 99.8287% ( 1) 00:13:35.909 13.498 - 13.556: 99.8362% ( 1) 00:13:35.909 13.556 - 13.615: 99.8436% ( 1) 00:13:35.909 13.731 - 13.789: 99.8511% ( 1) 00:13:35.909 13.905 - 13.964: 99.8585% ( 1) 00:13:35.909 14.255 - 14.313: 99.8660% ( 1) 00:13:35.909 14.720 - 14.778: 99.8734% ( 1) 00:13:35.909 15.244 - 15.360: 99.8883% ( 2) 00:13:35.909 16.756 - 16.873: 99.8958% ( 1) 00:13:35.909 17.338 - 17.455: 99.9032% ( 1) 00:13:35.909 18.851 - 18.967: 99.9106% ( 1) 00:13:35.909 18.967 - 19.084: 99.9181% ( 1) 00:13:35.909 19.898 - 20.015: 99.9255% ( 1) 00:13:35.909 24.436 - 24.553: 99.9330% ( 1) 00:13:35.909 28.160 - 28.276: 99.9404% ( 1) 00:13:35.909 3991.738 - 4021.527: 100.0000% ( 8) 00:13:35.909 00:13:35.909 Complete histogram 00:13:35.909 
================== 00:13:35.909 Range in us Cumulative Count 00:13:35.909 2.298 - 2.313: 0.1713% ( 23) 00:13:35.909 2.313 - 2.327: 28.6821% ( 3829) 00:13:35.909 2.327 - 2.342: 79.9851% ( 6890) 00:13:35.909 2.342 - 2.356: 86.3366% ( 853) 00:13:35.909 2.356 - 2.371: 87.4162% ( 145) 00:13:35.909 2.371 - 2.385: 89.7096% ( 308) 00:13:35.909 2.385 - 2.400: 92.1370% ( 326) 00:13:35.909 2.400 - 2.415: 94.1772% ( 274) 00:13:35.909 2.415 - 2.429: 95.0186% ( 113) 00:13:35.909 2.429 - 2.444: 95.5994% ( 78) 00:13:35.909 2.444 - 2.458: 95.8824% ( 38) 00:13:35.909 2.458 - 2.473: 96.0908% ( 28) 00:13:35.909 2.473 - 2.487: 96.2472% ( 21) 00:13:35.909 2.487 - 2.502: 96.3663% ( 16) 00:13:35.909 2.502 - 2.516: 96.4483% ( 11) 00:13:35.909 2.516 - 2.531: 96.5302% ( 11) 00:13:35.909 2.531 - 2.545: 96.5450% ( 2) 00:13:35.909 2.545 - 2.560: 96.5748% ( 4) 00:13:35.909 2.560 - 2.575: 96.5897% ( 2) 00:13:35.909 2.575 - 2.589: 96.6270% ( 5) 00:13:35.909 2.589 - 2.604: 96.6493% ( 3) 00:13:35.909 2.604 - 2.618: 96.7014% ( 7) 00:13:35.909 2.618 - 2.633: 96.7163% ( 2) 00:13:35.909 2.633 - 2.647: 96.7535% ( 5) 00:13:35.909 2.647 - 2.662: 96.7684% ( 2) 00:13:35.909 2.662 - 2.676: 96.7982% ( 4) 00:13:35.909 2.676 - 2.691: 96.8280% ( 4) 00:13:35.909 2.691 - 2.705: 96.8801% ( 7) 00:13:35.909 2.705 - 2.720: 96.8950% ( 2) 00:13:35.909 2.720 - 2.735: 96.9248% ( 4) 00:13:35.909 2.735 - 2.749: 96.9620% ( 5) 00:13:35.909 2.749 - 2.764: 97.0216% ( 8) 00:13:35.909 2.764 - 2.778: 97.0812% ( 8) 00:13:35.909 2.778 - 2.793: 97.1109% ( 4) 00:13:35.909 2.793 - 2.807: 97.1556% ( 6) 00:13:35.909 2.807 - 2.822: 97.2077% ( 7) 00:13:35.909 2.822 - 2.836: 97.2450% ( 5) 00:13:35.909 2.836 - 2.851: 97.2897% ( 6) 00:13:35.909 2.851 - 2.865: 97.3567% ( 9) 00:13:35.909 2.865 - 2.880: 97.3939% ( 5) 00:13:35.909 2.880 - 2.895: 97.4460% ( 7) 00:13:35.909 2.895 - 2.909: 97.4758% ( 4) 00:13:35.909 2.909 - 2.924: 97.5205% ( 6) 00:13:35.909 2.924 - 2.938: 97.5503% ( 4) 00:13:35.909 2.938 - 2.953: 97.5949% ( 6) 00:13:35.909 2.953 - 2.967: 97.6173% ( 3) 00:13:35.909 2.967 - 2.982: 97.6545% ( 5) 00:13:35.909 2.982 - 2.996: 97.7066% ( 7) 00:13:35.909 2.996 - 3.011: 97.7662% ( 8) 00:13:35.909 3.011 - 3.025: 97.8407% ( 10) 00:13:35.909 3.025 - 3.040: 97.8555% ( 2) 00:13:35.909 3.040 - 3.055: 97.8779% ( 3) 00:13:35.909 3.055 - 3.069: 97.9077% ( 4) 00:13:35.909 3.069 - 3.084: 97.9449% ( 5) 00:13:35.909 3.098 - 3.113: 98.0045% ( 8) 00:13:35.909 3.113 - 3.127: 98.0566% ( 7) 00:13:35.909 3.127 - 3.142: 98.0864% ( 4) 00:13:35.909 3.142 - 3.156: 98.1162% ( 4) 00:13:35.909 3.156 - 3.171: 98.1385% ( 3) 00:13:35.909 3.171 - 3.185: 98.1757% ( 5) 00:13:35.909 3.185 - 3.200: 98.1981% ( 3) 00:13:35.909 3.200 - 3.215: 98.2204% ( 3) 00:13:35.909 3.215 - 3.229: 98.2502% ( 4) 00:13:35.909 3.229 - 3.244: 98.2874% ( 5) 00:13:35.909 3.244 - 3.258: 98.3023% ( 2) 00:13:35.909 3.258 - 3.273: 98.3470% ( 6) 00:13:35.909 3.273 - 3.287: 98.3619% ( 2) 00:13:35.909 3.287 - 3.302: 98.3768% ( 2) 00:13:35.909 3.302 - 3.316: 98.4214% ( 6) 00:13:35.909 3.316 - 3.331: 98.4438% ( 3) 00:13:35.909 3.331 - 3.345: 98.4810% ( 5) 00:13:35.909 3.345 - 3.360: 98.5034% ( 3) 00:13:35.909 3.360 - 3.375: 98.5331% ( 4) 00:13:35.909 3.375 - 3.389: 98.5704% ( 5) 00:13:35.909 3.389 - 3.404: 98.5853% ( 2) 00:13:35.909 3.404 - 3.418: 98.5927% ( 1) 00:13:35.909 3.418 - 3.433: 98.6150% ( 3) 00:13:35.909 3.433 - 3.447: 98.6299% ( 2) 00:13:35.909 3.447 - 3.462: 98.6374% ( 1) 00:13:35.909 3.462 - 3.476: 98.6448% ( 1) 00:13:35.909 3.476 - 3.491: 98.6597% ( 2) 00:13:35.909 3.491 - 3.505: 98.6746% ( 2) 00:13:35.909 3.505 - 
3.520: 98.6821% ( 1) 00:13:35.909 3.520 - 3.535: 98.6895% ( 1) 00:13:35.909 3.535 - 3.549: 98.7193% ( 4) 00:13:35.909 3.549 - 3.564: 98.7342% ( 2) 00:13:35.909 3.564 - 3.578: 98.7491% ( 2) 00:13:35.909 3.607 - 3.622: 98.7640% ( 2) 00:13:35.909 3.622 - 3.636: 98.7714% ( 1) 00:13:35.909 3.636 - 3.651: 98.7789% ( 1) 00:13:35.909 3.651 - 3.665: 98.7937% ( 2) 00:13:35.909 3.665 - 3.680: 98.8086% ( 2) 00:13:35.909 3.680 - 3.695: 98.8161% ( 1) 00:13:35.909 3.695 - 3.709: 98.8235% ( 1) 00:13:35.909 3.724 - 3.753: 98.8608% ( 5) 00:13:35.909 3.753 - 3.782: 98.8757% ( 2) 00:13:35.909 3.782 - 3.811: 98.8831% ( 1) 00:13:35.909 3.840 - 3.869: 98.8980% ( 2) 00:13:35.909 3.869 - 3.898: 98.9054% ( 1) 00:13:35.909 3.898 - 3.927: 98.9203% ( 2) 00:13:35.909 3.927 - 3.956: 98.9278% ( 1) 00:13:35.909 4.073 - 4.102: 98.9427% ( 2) 00:13:35.909 4.102 - 4.131: 98.9501% ( 1) 00:13:35.909 4.131 - 4.160: 98.9576% ( 1) 00:13:35.909 4.189 - 4.218: 98.9799% ( 3) 00:13:35.909 4.276 - 4.305: 98.9873% ( 1) 00:13:35.909 4.305 - 4.335: 98.9948% ( 1) 00:13:35.909 4.335 - 4.364: 99.0171% ( 3) 00:13:35.909 4.364 - 4.393: 99.0246% ( 1) 00:13:35.909 4.393 - 4.422: 99.0320% ( 1) 00:13:35.909 4.422 - 4.451: 99.0395% ( 1) 00:13:35.909 4.509 - 4.538: 99.0618% ( 3) 00:13:35.909 4.538 - 4.567: 99.0767% ( 2) 00:13:35.909 4.596 - 4.625: 99.0841% ( 1) 00:13:35.909 4.655 - 4.684: 99.0990% ( 2) 00:13:35.909 4.887 - 4.916: 99.1139% ( 2) 00:13:35.909 4.975 - 5.004: 99.1288% ( 2) 00:13:35.909 5.149 - 5.178: 99.1363% ( 1) 00:13:35.909 5.178 - 5.207: 99.1512% ( 2) 00:13:35.909 5.236 - 5.265: 99.1660% ( 2) 00:13:35.909 5.265 - 5.295: 99.1735% ( 1) 00:13:35.909 5.789 - 5.818: 99.1809% ( 1) 00:13:35.909 7.622 - 7.680: 99.1884% ( 1) 00:13:35.909 7.796 - 7.855: 99.1958% ( 1) 00:13:35.909 7.855 - 7.913: 99.2033% ( 1) 00:13:35.909 8.029 - 8.087: 99.2107% ( 1) 00:13:35.909 8.204 - 8.262: 99.2182% ( 1) 00:13:35.909 8.436 - 8.495: 99.2331% ( 2) 00:13:35.909 8.495 - 8.553: 99.2405% ( 1) 00:13:35.909 8.553 - 8.611: 99.2480% ( 1) 00:13:35.909 9.425 - 9.484: 99.2554% ( 1) 00:13:35.909 9.600 - 9.658: 99.2628% ( 1) 00:13:35.909 9.658 - 9.716: 99.2703% ( 1) 00:13:35.909 9.716 - 9.775: 99.2777% ( 1) 00:13:35.909 9.775 - 9.833: 99.2852% ( 1) 00:13:35.909 9.833 - 9.891: 99.2926% ( 1) 00:13:35.909 10.182 - 10.240: 99.3075% ( 2) 00:13:35.909 10.531 - 10.589: 99.3150% ( 1) 00:13:35.909 10.938 - 10.996: 99.3224% ( 1) 00:13:35.909 11.055 - 11.113: 99.3299% ( 1) 00:13:35.909 11.113 - 11.171: 99.3373% ( 1) 00:13:35.909 13.673 - 13.731: 99.3448% ( 1) 00:13:35.910 16.873 - 16.989: 99.3522% ( 1) 00:13:35.910 17.455 - 17.571: 99.3596% ( 1) 00:13:35.910 17.804 - 17.920: 99.3671% ( 1) 00:13:35.910 19.665 - 19.782: 99.3745% ( 1) 00:13:35.910 23.622 - 23.738: 99.3820% ( 1) 00:13:35.910 25.018 - 25.135: 99.3894% ( 1) 00:13:35.910 30.022 - 30.255: 99.3969% ( 1) 00:13:35.910 3991.738 - 4021.527: 99.9628% ( 76) 00:13:35.910 4021.527 - 4051.316: 100.0000% ( 5) 00:13:35.910 00:13:35.910 02:10:50 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:35.910 02:10:50 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:35.910 02:10:50 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:35.910 02:10:50 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:35.910 02:10:50 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:36.168 [ 00:13:36.168 { 00:13:36.168 "allow_any_host": true, 
00:13:36.168 "hosts": [], 00:13:36.168 "listen_addresses": [], 00:13:36.168 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:36.168 "subtype": "Discovery" 00:13:36.168 }, 00:13:36.168 { 00:13:36.168 "allow_any_host": true, 00:13:36.168 "hosts": [], 00:13:36.168 "listen_addresses": [ 00:13:36.168 { 00:13:36.168 "adrfam": "IPv4", 00:13:36.168 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:36.168 "transport": "VFIOUSER", 00:13:36.168 "trsvcid": "0", 00:13:36.168 "trtype": "VFIOUSER" 00:13:36.168 } 00:13:36.168 ], 00:13:36.168 "max_cntlid": 65519, 00:13:36.168 "max_namespaces": 32, 00:13:36.168 "min_cntlid": 1, 00:13:36.168 "model_number": "SPDK bdev Controller", 00:13:36.168 "namespaces": [ 00:13:36.168 { 00:13:36.168 "bdev_name": "Malloc1", 00:13:36.168 "name": "Malloc1", 00:13:36.168 "nguid": "B97EA14F7BC24C1B9C2B0069EA331B54", 00:13:36.168 "nsid": 1, 00:13:36.168 "uuid": "b97ea14f-7bc2-4c1b-9c2b-0069ea331b54" 00:13:36.168 }, 00:13:36.168 { 00:13:36.168 "bdev_name": "Malloc3", 00:13:36.168 "name": "Malloc3", 00:13:36.168 "nguid": "6779340765A441E99D7979D1CC155CD2", 00:13:36.168 "nsid": 2, 00:13:36.168 "uuid": "67793407-65a4-41e9-9d79-79d1cc155cd2" 00:13:36.168 } 00:13:36.168 ], 00:13:36.168 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:36.168 "serial_number": "SPDK1", 00:13:36.168 "subtype": "NVMe" 00:13:36.168 }, 00:13:36.168 { 00:13:36.168 "allow_any_host": true, 00:13:36.168 "hosts": [], 00:13:36.168 "listen_addresses": [ 00:13:36.168 { 00:13:36.168 "adrfam": "IPv4", 00:13:36.168 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:36.168 "transport": "VFIOUSER", 00:13:36.168 "trsvcid": "0", 00:13:36.168 "trtype": "VFIOUSER" 00:13:36.168 } 00:13:36.168 ], 00:13:36.168 "max_cntlid": 65519, 00:13:36.168 "max_namespaces": 32, 00:13:36.168 "min_cntlid": 1, 00:13:36.168 "model_number": "SPDK bdev Controller", 00:13:36.168 "namespaces": [ 00:13:36.168 { 00:13:36.168 "bdev_name": "Malloc2", 00:13:36.169 "name": "Malloc2", 00:13:36.169 "nguid": "185B581E054A40F382F8BC222F808C1A", 00:13:36.169 "nsid": 1, 00:13:36.169 "uuid": "185b581e-054a-40f3-82f8-bc222f808c1a" 00:13:36.169 } 00:13:36.169 ], 00:13:36.169 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:36.169 "serial_number": "SPDK2", 00:13:36.169 "subtype": "NVMe" 00:13:36.169 } 00:13:36.169 ] 00:13:36.169 02:10:50 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:36.169 02:10:50 -- target/nvmf_vfio_user.sh@34 -- # aerpid=69743 00:13:36.169 02:10:50 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:36.169 02:10:50 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:36.169 02:10:50 -- common/autotest_common.sh@1244 -- # local i=0 00:13:36.169 02:10:50 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:36.169 02:10:50 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:13:36.169 02:10:50 -- common/autotest_common.sh@1247 -- # i=1 00:13:36.169 02:10:50 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:36.427 02:10:50 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:36.427 02:10:50 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:13:36.427 02:10:50 -- common/autotest_common.sh@1247 -- # i=2 00:13:36.427 02:10:50 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:36.427 02:10:50 -- common/autotest_common.sh@1245 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:36.427 02:10:50 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:13:36.427 02:10:50 -- common/autotest_common.sh@1247 -- # i=3 00:13:36.427 02:10:50 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:36.427 02:10:50 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:36.427 02:10:50 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:36.427 02:10:50 -- common/autotest_common.sh@1255 -- # return 0 00:13:36.427 02:10:50 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:36.427 02:10:50 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:36.994 Malloc4 00:13:36.994 02:10:51 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:37.252 02:10:51 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:37.252 Asynchronous Event Request test 00:13:37.252 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.252 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.252 Registering asynchronous event callbacks... 00:13:37.252 Starting namespace attribute notice tests for all controllers... 00:13:37.252 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:37.252 aer_cb - Changed Namespace 00:13:37.252 Cleaning up... 00:13:37.511 [ 00:13:37.511 { 00:13:37.511 "allow_any_host": true, 00:13:37.511 "hosts": [], 00:13:37.511 "listen_addresses": [], 00:13:37.511 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:37.511 "subtype": "Discovery" 00:13:37.511 }, 00:13:37.511 { 00:13:37.511 "allow_any_host": true, 00:13:37.511 "hosts": [], 00:13:37.511 "listen_addresses": [ 00:13:37.511 { 00:13:37.511 "adrfam": "IPv4", 00:13:37.511 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:37.511 "transport": "VFIOUSER", 00:13:37.511 "trsvcid": "0", 00:13:37.511 "trtype": "VFIOUSER" 00:13:37.511 } 00:13:37.511 ], 00:13:37.511 "max_cntlid": 65519, 00:13:37.511 "max_namespaces": 32, 00:13:37.511 "min_cntlid": 1, 00:13:37.511 "model_number": "SPDK bdev Controller", 00:13:37.511 "namespaces": [ 00:13:37.511 { 00:13:37.511 "bdev_name": "Malloc1", 00:13:37.511 "name": "Malloc1", 00:13:37.511 "nguid": "B97EA14F7BC24C1B9C2B0069EA331B54", 00:13:37.511 "nsid": 1, 00:13:37.511 "uuid": "b97ea14f-7bc2-4c1b-9c2b-0069ea331b54" 00:13:37.511 }, 00:13:37.511 { 00:13:37.511 "bdev_name": "Malloc3", 00:13:37.511 "name": "Malloc3", 00:13:37.511 "nguid": "6779340765A441E99D7979D1CC155CD2", 00:13:37.511 "nsid": 2, 00:13:37.511 "uuid": "67793407-65a4-41e9-9d79-79d1cc155cd2" 00:13:37.511 } 00:13:37.511 ], 00:13:37.511 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:37.511 "serial_number": "SPDK1", 00:13:37.511 "subtype": "NVMe" 00:13:37.511 }, 00:13:37.511 { 00:13:37.511 "allow_any_host": true, 00:13:37.511 "hosts": [], 00:13:37.511 "listen_addresses": [ 00:13:37.511 { 00:13:37.511 "adrfam": "IPv4", 00:13:37.511 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:37.511 "transport": "VFIOUSER", 00:13:37.511 "trsvcid": "0", 00:13:37.511 "trtype": "VFIOUSER" 00:13:37.511 } 00:13:37.511 ], 00:13:37.511 "max_cntlid": 65519, 00:13:37.511 "max_namespaces": 32, 00:13:37.511 "min_cntlid": 1, 00:13:37.511 "model_number": "SPDK bdev Controller", 00:13:37.511 "namespaces": [ 00:13:37.511 { 00:13:37.511 "bdev_name": "Malloc2", 00:13:37.511 
"name": "Malloc2", 00:13:37.511 "nguid": "185B581E054A40F382F8BC222F808C1A", 00:13:37.511 "nsid": 1, 00:13:37.511 "uuid": "185b581e-054a-40f3-82f8-bc222f808c1a" 00:13:37.511 }, 00:13:37.511 { 00:13:37.511 "bdev_name": "Malloc4", 00:13:37.511 "name": "Malloc4", 00:13:37.511 "nguid": "E4AD7B2F5F424E2BA439412708727517", 00:13:37.511 "nsid": 2, 00:13:37.511 "uuid": "e4ad7b2f-5f42-4e2b-a439-412708727517" 00:13:37.511 } 00:13:37.511 ], 00:13:37.511 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:37.511 "serial_number": "SPDK2", 00:13:37.511 "subtype": "NVMe" 00:13:37.511 } 00:13:37.511 ] 00:13:37.511 02:10:51 -- target/nvmf_vfio_user.sh@44 -- # wait 69743 00:13:37.511 02:10:51 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:37.511 02:10:51 -- target/nvmf_vfio_user.sh@95 -- # killprocess 69064 00:13:37.511 02:10:51 -- common/autotest_common.sh@926 -- # '[' -z 69064 ']' 00:13:37.511 02:10:51 -- common/autotest_common.sh@930 -- # kill -0 69064 00:13:37.511 02:10:51 -- common/autotest_common.sh@931 -- # uname 00:13:37.511 02:10:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:37.511 02:10:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69064 00:13:37.511 killing process with pid 69064 00:13:37.511 02:10:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:37.511 02:10:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:37.511 02:10:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69064' 00:13:37.511 02:10:51 -- common/autotest_common.sh@945 -- # kill 69064 00:13:37.511 [2024-05-14 02:10:51.908774] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:37.511 02:10:51 -- common/autotest_common.sh@950 -- # wait 69064 00:13:37.770 02:10:52 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:37.770 02:10:52 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:37.770 02:10:52 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:37.770 02:10:52 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:37.770 02:10:52 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:37.770 02:10:52 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:37.770 02:10:52 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=69792 00:13:37.770 Process pid: 69792 00:13:37.770 02:10:52 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 69792' 00:13:37.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.770 02:10:52 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:37.770 02:10:52 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 69792 00:13:37.770 02:10:52 -- common/autotest_common.sh@819 -- # '[' -z 69792 ']' 00:13:37.770 02:10:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.770 02:10:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:37.770 02:10:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:37.770 02:10:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:37.770 02:10:52 -- common/autotest_common.sh@10 -- # set +x 00:13:37.770 [2024-05-14 02:10:52.210682] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:37.770 [2024-05-14 02:10:52.213471] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:37.770 [2024-05-14 02:10:52.213576] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.770 [2024-05-14 02:10:52.358643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.029 [2024-05-14 02:10:52.418014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:38.029 [2024-05-14 02:10:52.418150] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.029 [2024-05-14 02:10:52.418164] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.029 [2024-05-14 02:10:52.418173] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.029 [2024-05-14 02:10:52.418258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.029 [2024-05-14 02:10:52.418702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.029 [2024-05-14 02:10:52.418817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.029 [2024-05-14 02:10:52.418822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.029 [2024-05-14 02:10:52.466272] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:13:38.029 [2024-05-14 02:10:52.473002] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:13:38.029 [2024-05-14 02:10:52.473165] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:13:38.029 [2024-05-14 02:10:52.473849] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:38.029 [2024-05-14 02:10:52.474012] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
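The notices just above come from relaunching the target with --interrupt-mode: the app thread and each nvmf poll-group thread are switched from busy polling to interrupt-driven operation before any RPCs are issued. A minimal sketch of that relaunch, with a readiness poll standing in for the waitforlisten helper (the rpc_get_methods probe and the retry cap are assumptions about the helper, not its exact logic):

# relaunch the target in interrupt mode on cores 0-3 (flags copied from the log)
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
nvmfpid=$!
trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

# block until the RPC socket answers before configuring the transport
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done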
00:13:38.963 02:10:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:38.963 02:10:53 -- common/autotest_common.sh@852 -- # return 0 00:13:38.963 02:10:53 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:39.902 02:10:54 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:39.902 02:10:54 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:39.902 02:10:54 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:39.902 02:10:54 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:39.902 02:10:54 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:39.902 02:10:54 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:40.469 Malloc1 00:13:40.469 02:10:54 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:40.728 02:10:55 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:40.987 02:10:55 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:41.246 02:10:55 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:41.246 02:10:55 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:41.246 02:10:55 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:41.504 Malloc2 00:13:41.504 02:10:55 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:41.761 02:10:56 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:42.020 02:10:56 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:42.278 02:10:56 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:42.278 02:10:56 -- target/nvmf_vfio_user.sh@95 -- # killprocess 69792 00:13:42.278 02:10:56 -- common/autotest_common.sh@926 -- # '[' -z 69792 ']' 00:13:42.278 02:10:56 -- common/autotest_common.sh@930 -- # kill -0 69792 00:13:42.278 02:10:56 -- common/autotest_common.sh@931 -- # uname 00:13:42.278 02:10:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:42.278 02:10:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69792 00:13:42.278 02:10:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:42.278 killing process with pid 69792 00:13:42.278 02:10:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:42.278 02:10:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69792' 00:13:42.278 02:10:56 -- common/autotest_common.sh@945 -- # kill 69792 00:13:42.278 02:10:56 -- common/autotest_common.sh@950 -- # wait 69792 00:13:42.536 02:10:56 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:42.536 02:10:56 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:42.536 00:13:42.536 real 0m55.288s 00:13:42.536 user 3m38.528s 00:13:42.536 sys 0m3.754s 
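The bring-up just logged repeats the same five RPC steps once per device inside for i in $(seq 1 $NUM_DEVICES); written out as a standalone loop over the two devices of this run it looks like the sketch below (commands copied from the log; the loop wrapper is an editorial condensation, not a quote of nvmf_vfio_user.sh):

# per-device vfio-user bring-up, VFIOUSER transport in interrupt mode (-M -I)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER -M -I

for i in 1 2; do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"                   # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER -a "$dir" -s 0
done

Teardown is the reverse seen above: stop_nvmf_vfio_user kills the target pid and removes /var/run/vfio-user.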
00:13:42.536 02:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.536 02:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:42.536 ************************************ 00:13:42.536 END TEST nvmf_vfio_user 00:13:42.536 ************************************ 00:13:42.536 02:10:57 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:42.536 02:10:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:42.536 02:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:42.536 02:10:57 -- common/autotest_common.sh@10 -- # set +x 00:13:42.536 ************************************ 00:13:42.536 START TEST nvmf_vfio_user_nvme_compliance 00:13:42.536 ************************************ 00:13:42.536 02:10:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:42.536 * Looking for test storage... 00:13:42.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:13:42.536 02:10:57 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:42.536 02:10:57 -- nvmf/common.sh@7 -- # uname -s 00:13:42.536 02:10:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.537 02:10:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.537 02:10:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.537 02:10:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.537 02:10:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.537 02:10:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.537 02:10:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.537 02:10:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.537 02:10:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.537 02:10:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.537 02:10:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:13:42.537 02:10:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:13:42.537 02:10:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.537 02:10:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.537 02:10:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:42.537 02:10:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.537 02:10:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.537 02:10:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.537 02:10:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.537 02:10:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.537 02:10:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.537 02:10:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.537 02:10:57 -- paths/export.sh@5 -- # export PATH 00:13:42.537 02:10:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.537 02:10:57 -- nvmf/common.sh@46 -- # : 0 00:13:42.537 02:10:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:42.537 02:10:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:42.537 02:10:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:42.537 02:10:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.537 02:10:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.537 02:10:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:42.537 02:10:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:42.537 02:10:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:42.537 02:10:57 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.537 02:10:57 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.537 02:10:57 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:42.537 02:10:57 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:42.537 02:10:57 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:42.537 02:10:57 -- compliance/compliance.sh@20 -- # nvmfpid=69987 00:13:42.537 02:10:57 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:42.537 Process pid: 69987 00:13:42.537 02:10:57 -- compliance/compliance.sh@21 -- # echo 'Process pid: 69987' 00:13:42.537 02:10:57 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:42.537 02:10:57 -- compliance/compliance.sh@24 -- # waitforlisten 69987 00:13:42.537 02:10:57 -- common/autotest_common.sh@819 -- # '[' -z 69987 ']' 00:13:42.537 02:10:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.537 02:10:57 -- common/autotest_common.sh@824 -- # local max_retries=100 
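Sourcing nvmf/common.sh at the top of this stage also generates a fresh host identity: nvme gen-hostnqn produced the uuid-based NQN shown above, and its uuid suffix became the host ID. A short reconstruction consistent with the logged values (the parameter expansion is an assumption about the helper, not a quote of it):

# host identity derivation, matching the values logged above
NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 in this run
NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep the trailing uuid as the host ID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # flags later used with the 'nvme connect' helper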
00:13:42.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.537 02:10:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.537 02:10:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:42.537 02:10:57 -- common/autotest_common.sh@10 -- # set +x 00:13:42.796 [2024-05-14 02:10:57.181144] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:42.796 [2024-05-14 02:10:57.181259] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.796 [2024-05-14 02:10:57.317349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.055 [2024-05-14 02:10:57.389864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:43.055 [2024-05-14 02:10:57.390049] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.055 [2024-05-14 02:10:57.390067] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.055 [2024-05-14 02:10:57.390078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.055 [2024-05-14 02:10:57.390181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.055 [2024-05-14 02:10:57.390284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.055 [2024-05-14 02:10:57.390299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.622 02:10:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:43.622 02:10:58 -- common/autotest_common.sh@852 -- # return 0 00:13:43.622 02:10:58 -- compliance/compliance.sh@26 -- # sleep 1 00:13:44.998 02:10:59 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:44.998 02:10:59 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:44.998 02:10:59 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:44.998 02:10:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.998 02:10:59 -- common/autotest_common.sh@10 -- # set +x 00:13:44.998 02:10:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.998 02:10:59 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:44.998 02:10:59 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:44.998 02:10:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.998 02:10:59 -- common/autotest_common.sh@10 -- # set +x 00:13:44.998 malloc0 00:13:44.998 02:10:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.998 02:10:59 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:44.998 02:10:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.998 02:10:59 -- common/autotest_common.sh@10 -- # set +x 00:13:44.998 02:10:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.998 02:10:59 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:44.998 02:10:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.998 02:10:59 -- common/autotest_common.sh@10 -- # set +x 00:13:44.998 02:10:59 -- common/autotest_common.sh@579 -- # [[ 0 == 
0 ]] 00:13:44.998 02:10:59 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:44.998 02:10:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.999 02:10:59 -- common/autotest_common.sh@10 -- # set +x 00:13:44.999 02:10:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.999 02:10:59 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:44.999 00:13:44.999 00:13:44.999 CUnit - A unit testing framework for C - Version 2.1-3 00:13:44.999 http://cunit.sourceforge.net/ 00:13:44.999 00:13:44.999 00:13:44.999 Suite: nvme_compliance 00:13:44.999 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-14 02:10:59.464981] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:44.999 [2024-05-14 02:10:59.465035] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:44.999 [2024-05-14 02:10:59.465047] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:44.999 passed 00:13:45.257 Test: admin_identify_ctrlr_verify_fused ...passed 00:13:45.257 Test: admin_identify_ns ...[2024-05-14 02:10:59.727801] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:45.257 [2024-05-14 02:10:59.735786] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:45.257 passed 00:13:45.515 Test: admin_get_features_mandatory_features ...passed 00:13:45.515 Test: admin_get_features_optional_features ...passed 00:13:45.773 Test: admin_set_features_number_of_queues ...passed 00:13:45.773 Test: admin_get_log_page_mandatory_logs ...passed 00:13:46.031 Test: admin_get_log_page_with_lpo ...[2024-05-14 02:11:00.392788] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:46.031 passed 00:13:46.031 Test: fabric_property_get ...passed 00:13:46.031 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-14 02:11:00.591698] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:46.289 passed 00:13:46.289 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-14 02:11:00.772784] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:46.289 [2024-05-14 02:11:00.788785] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:46.289 passed 00:13:46.547 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-14 02:11:00.884297] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:46.547 passed 00:13:46.547 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-14 02:11:01.051783] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:46.547 [2024-05-14 02:11:01.075784] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:46.547 passed 00:13:46.805 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-14 02:11:01.174190] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:46.805 [2024-05-14 02:11:01.174251] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:46.805 passed 00:13:46.805 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-14 
02:11:01.357778] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:46.805 [2024-05-14 02:11:01.365777] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:46.805 [2024-05-14 02:11:01.373783] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:46.805 [2024-05-14 02:11:01.381775] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:47.063 passed 00:13:47.063 Test: admin_create_io_sq_verify_pc ...[2024-05-14 02:11:01.516801] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:47.063 passed 00:13:48.437 Test: admin_create_io_qp_max_qps ...[2024-05-14 02:11:02.740784] nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:48.696 passed 00:13:48.955 Test: admin_create_io_sq_shared_cq ...[2024-05-14 02:11:03.354778] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:48.955 passed 00:13:48.955 00:13:48.955 Run Summary: Type Total Ran Passed Failed Inactive 00:13:48.955 suites 1 1 n/a 0 0 00:13:48.955 tests 18 18 18 0 0 00:13:48.955 asserts 360 360 360 0 n/a 00:13:48.955 00:13:48.955 Elapsed time = 1.644 seconds 00:13:48.955 02:11:03 -- compliance/compliance.sh@42 -- # killprocess 69987 00:13:48.955 02:11:03 -- common/autotest_common.sh@926 -- # '[' -z 69987 ']' 00:13:48.955 02:11:03 -- common/autotest_common.sh@930 -- # kill -0 69987 00:13:48.955 02:11:03 -- common/autotest_common.sh@931 -- # uname 00:13:48.955 02:11:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:48.955 02:11:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69987 00:13:48.955 02:11:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:48.955 02:11:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:48.955 killing process with pid 69987 00:13:48.955 02:11:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69987' 00:13:48.955 02:11:03 -- common/autotest_common.sh@945 -- # kill 69987 00:13:48.955 02:11:03 -- common/autotest_common.sh@950 -- # wait 69987 00:13:49.212 02:11:03 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:49.212 02:11:03 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:49.212 00:13:49.212 real 0m6.653s 00:13:49.212 user 0m18.909s 00:13:49.212 sys 0m0.467s 00:13:49.212 02:11:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.212 02:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:49.212 ************************************ 00:13:49.212 END TEST nvmf_vfio_user_nvme_compliance 00:13:49.212 ************************************ 00:13:49.212 02:11:03 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:49.212 02:11:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:49.212 02:11:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:49.212 02:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:49.212 ************************************ 00:13:49.213 START TEST nvmf_vfio_user_fuzz 00:13:49.213 ************************************ 00:13:49.213 02:11:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:49.213 * Looking for test storage... 
00:13:49.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:49.213 02:11:03 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:49.213 02:11:03 -- nvmf/common.sh@7 -- # uname -s 00:13:49.213 02:11:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.213 02:11:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.213 02:11:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.213 02:11:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.213 02:11:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.213 02:11:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.213 02:11:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.213 02:11:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.213 02:11:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.213 02:11:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.213 02:11:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:13:49.213 02:11:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:13:49.213 02:11:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.213 02:11:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.213 02:11:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:49.213 02:11:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:49.471 02:11:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.471 02:11:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.471 02:11:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.471 02:11:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.471 02:11:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.471 02:11:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.471 02:11:03 -- 
paths/export.sh@5 -- # export PATH 00:13:49.471 02:11:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.471 02:11:03 -- nvmf/common.sh@46 -- # : 0 00:13:49.471 02:11:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:49.471 02:11:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:49.471 02:11:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:49.471 02:11:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.471 02:11:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.471 02:11:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:49.471 02:11:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:49.471 02:11:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=70132 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 70132' 00:13:49.471 Process pid: 70132 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:49.471 02:11:03 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 70132 00:13:49.471 02:11:03 -- common/autotest_common.sh@819 -- # '[' -z 70132 ']' 00:13:49.471 02:11:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.471 02:11:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:49.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.471 02:11:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:49.471 02:11:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:49.471 02:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:50.405 02:11:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:50.405 02:11:04 -- common/autotest_common.sh@852 -- # return 0 00:13:50.405 02:11:04 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:51.340 02:11:05 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:51.340 02:11:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.340 02:11:05 -- common/autotest_common.sh@10 -- # set +x 00:13:51.340 02:11:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.340 02:11:05 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:51.340 02:11:05 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:51.340 02:11:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.340 02:11:05 -- common/autotest_common.sh@10 -- # set +x 00:13:51.340 malloc0 00:13:51.340 02:11:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.340 02:11:05 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:51.340 02:11:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.340 02:11:05 -- common/autotest_common.sh@10 -- # set +x 00:13:51.599 02:11:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.599 02:11:05 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:51.599 02:11:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.599 02:11:05 -- common/autotest_common.sh@10 -- # set +x 00:13:51.599 02:11:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.599 02:11:05 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:51.599 02:11:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.599 02:11:05 -- common/autotest_common.sh@10 -- # set +x 00:13:51.599 02:11:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.599 02:11:05 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:51.599 02:11:05 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:51.857 Shutting down the fuzz application 00:13:51.857 02:11:06 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:51.857 02:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.857 02:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:51.857 02:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.857 02:11:06 -- target/vfio_user_fuzz.sh@46 -- # killprocess 70132 00:13:51.857 02:11:06 -- common/autotest_common.sh@926 -- # '[' -z 70132 ']' 00:13:51.857 02:11:06 -- common/autotest_common.sh@930 -- # kill -0 70132 00:13:51.857 02:11:06 -- common/autotest_common.sh@931 -- # uname 00:13:51.857 02:11:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:51.857 02:11:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70132 00:13:51.857 02:11:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:51.857 02:11:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 
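The fuzz stage condenses to the same single-subsystem bring-up followed by a 30-second nvme_fuzz run with a fixed seed; a sketch with every command and flag taken from the log above (rpc.py is used here in place of the rpc_cmd helper):

# vfio-user fuzz run, as logged above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2021-09.io.spdk:cnode0
traddr=/var/run/vfio-user

$rpc nvmf_create_transport -t VFIOUSER
mkdir -p "$traddr"
$rpc bdev_malloc_create 64 512 -b malloc0
$rpc nvmf_create_subsystem "$nqn" -a -s spdk
$rpc nvmf_subsystem_add_ns "$nqn" malloc0
$rpc nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a "$traddr" -s 0

# 30 s fuzz with seed 123456 against the vfio-user endpoint, then clean up
/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz \
    -t 30 -S 123456 -F "trtype:VFIOUSER subnqn:$nqn traddr:$traddr" -N -a
$rpc nvmf_delete_subsystem "$nqn"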
00:13:51.857 killing process with pid 70132 00:13:51.857 02:11:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70132' 00:13:51.857 02:11:06 -- common/autotest_common.sh@945 -- # kill 70132 00:13:51.857 02:11:06 -- common/autotest_common.sh@950 -- # wait 70132 00:13:52.116 02:11:06 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:52.116 02:11:06 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:52.116 00:13:52.116 real 0m2.806s 00:13:52.116 user 0m3.196s 00:13:52.116 sys 0m0.303s 00:13:52.116 02:11:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.116 02:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:52.116 ************************************ 00:13:52.116 END TEST nvmf_vfio_user_fuzz 00:13:52.116 ************************************ 00:13:52.116 02:11:06 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:52.116 02:11:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:52.116 02:11:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.116 02:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:52.116 ************************************ 00:13:52.116 START TEST nvmf_host_management 00:13:52.116 ************************************ 00:13:52.116 02:11:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:52.116 * Looking for test storage... 00:13:52.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:52.116 02:11:06 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:52.116 02:11:06 -- nvmf/common.sh@7 -- # uname -s 00:13:52.116 02:11:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.116 02:11:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.116 02:11:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.116 02:11:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.116 02:11:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.116 02:11:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.116 02:11:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.116 02:11:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.116 02:11:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.116 02:11:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.116 02:11:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:13:52.116 02:11:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:13:52.116 02:11:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.116 02:11:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.116 02:11:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:52.116 02:11:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:52.116 02:11:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.116 02:11:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.116 02:11:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.116 02:11:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.116 02:11:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.116 02:11:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.116 02:11:06 -- paths/export.sh@5 -- # export PATH 00:13:52.116 02:11:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.116 02:11:06 -- nvmf/common.sh@46 -- # : 0 00:13:52.116 02:11:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:52.116 02:11:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:52.116 02:11:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:52.116 02:11:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.116 02:11:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.116 02:11:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:52.116 02:11:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:52.116 02:11:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:52.116 02:11:06 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.116 02:11:06 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:52.116 02:11:06 -- target/host_management.sh@104 -- # nvmftestinit 00:13:52.116 02:11:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:52.116 02:11:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.116 02:11:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:52.116 02:11:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:52.116 02:11:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:52.116 02:11:06 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.116 02:11:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.116 02:11:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.116 02:11:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:52.116 02:11:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:52.116 02:11:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:52.116 02:11:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:52.116 02:11:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:52.116 02:11:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:52.116 02:11:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.116 02:11:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.116 02:11:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:52.116 02:11:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:52.116 02:11:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:52.116 02:11:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:52.116 02:11:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:52.116 02:11:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.116 02:11:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:52.116 02:11:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:52.116 02:11:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:52.116 02:11:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:52.116 02:11:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:52.116 02:11:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:52.374 Cannot find device "nvmf_tgt_br" 00:13:52.374 02:11:06 -- nvmf/common.sh@154 -- # true 00:13:52.374 02:11:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.374 Cannot find device "nvmf_tgt_br2" 00:13:52.374 02:11:06 -- nvmf/common.sh@155 -- # true 00:13:52.374 02:11:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:52.374 02:11:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:52.374 Cannot find device "nvmf_tgt_br" 00:13:52.374 02:11:06 -- nvmf/common.sh@157 -- # true 00:13:52.374 02:11:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:52.374 Cannot find device "nvmf_tgt_br2" 00:13:52.374 02:11:06 -- nvmf/common.sh@158 -- # true 00:13:52.374 02:11:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:52.374 02:11:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:52.374 02:11:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.374 02:11:06 -- nvmf/common.sh@161 -- # true 00:13:52.374 02:11:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.374 02:11:06 -- nvmf/common.sh@162 -- # true 00:13:52.374 02:11:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:52.374 02:11:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:52.374 02:11:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:52.374 02:11:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:13:52.374 02:11:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:52.374 02:11:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:52.374 02:11:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:52.374 02:11:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:52.374 02:11:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:52.374 02:11:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:52.374 02:11:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:52.374 02:11:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:52.374 02:11:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:52.374 02:11:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:52.374 02:11:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:52.374 02:11:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:52.375 02:11:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:52.375 02:11:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:52.375 02:11:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:52.375 02:11:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:52.632 02:11:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:52.632 02:11:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:52.632 02:11:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:52.632 02:11:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:52.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:13:52.632 00:13:52.632 --- 10.0.0.2 ping statistics --- 00:13:52.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.632 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:52.632 02:11:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:52.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:52.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:13:52.632 00:13:52.632 --- 10.0.0.3 ping statistics --- 00:13:52.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.632 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:52.632 02:11:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:52.632 00:13:52.632 --- 10.0.0.1 ping statistics --- 00:13:52.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.632 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:52.632 02:11:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.632 02:11:07 -- nvmf/common.sh@421 -- # return 0 00:13:52.632 02:11:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:52.632 02:11:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.632 02:11:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:52.632 02:11:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:52.632 02:11:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.632 02:11:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:52.632 02:11:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:52.632 02:11:07 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:52.632 02:11:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:52.632 02:11:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.632 02:11:07 -- common/autotest_common.sh@10 -- # set +x 00:13:52.632 ************************************ 00:13:52.632 START TEST nvmf_host_management 00:13:52.632 ************************************ 00:13:52.632 02:11:07 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:52.632 02:11:07 -- target/host_management.sh@69 -- # starttarget 00:13:52.632 02:11:07 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:52.632 02:11:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:52.632 02:11:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:52.632 02:11:07 -- common/autotest_common.sh@10 -- # set +x 00:13:52.632 02:11:07 -- nvmf/common.sh@469 -- # nvmfpid=70357 00:13:52.632 02:11:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:52.632 02:11:07 -- nvmf/common.sh@470 -- # waitforlisten 70357 00:13:52.632 02:11:07 -- common/autotest_common.sh@819 -- # '[' -z 70357 ']' 00:13:52.632 02:11:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.632 02:11:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:52.632 02:11:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.632 02:11:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:52.632 02:11:07 -- common/autotest_common.sh@10 -- # set +x 00:13:52.632 [2024-05-14 02:11:07.099011] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:52.633 [2024-05-14 02:11:07.099107] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.890 [2024-05-14 02:11:07.241484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.890 [2024-05-14 02:11:07.311976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:52.890 [2024-05-14 02:11:07.312134] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:52.890 [2024-05-14 02:11:07.312150] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.890 [2024-05-14 02:11:07.312161] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.890 [2024-05-14 02:11:07.312547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.890 [2024-05-14 02:11:07.312685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.890 [2024-05-14 02:11:07.312798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:52.890 [2024-05-14 02:11:07.312803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.825 02:11:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:53.825 02:11:08 -- common/autotest_common.sh@852 -- # return 0 00:13:53.825 02:11:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:53.825 02:11:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:53.825 02:11:08 -- common/autotest_common.sh@10 -- # set +x 00:13:53.825 02:11:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.825 02:11:08 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.826 02:11:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.826 02:11:08 -- common/autotest_common.sh@10 -- # set +x 00:13:53.826 [2024-05-14 02:11:08.108946] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.826 02:11:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.826 02:11:08 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:53.826 02:11:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:53.826 02:11:08 -- common/autotest_common.sh@10 -- # set +x 00:13:53.826 02:11:08 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:53.826 02:11:08 -- target/host_management.sh@23 -- # cat 00:13:53.826 02:11:08 -- target/host_management.sh@30 -- # rpc_cmd 00:13:53.826 02:11:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.826 02:11:08 -- common/autotest_common.sh@10 -- # set +x 00:13:53.826 Malloc0 00:13:53.826 [2024-05-14 02:11:08.177555] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.826 02:11:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.826 02:11:08 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:53.826 02:11:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:53.826 02:11:08 -- common/autotest_common.sh@10 -- # set +x 00:13:53.826 02:11:08 -- target/host_management.sh@73 -- # perfpid=70429 00:13:53.826 02:11:08 -- target/host_management.sh@74 -- # waitforlisten 70429 /var/tmp/bdevperf.sock 00:13:53.826 02:11:08 -- common/autotest_common.sh@819 -- # '[' -z 70429 ']' 00:13:53.826 02:11:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.826 02:11:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:53.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.826 02:11:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
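At this point the target side of the test is fully up: nvmf_tgt was started inside the nvmf_tgt_ns_spdk namespace on core mask 0x1E, the TCP transport was created with '-t tcp -o -u 8192', and the rpcs.txt batch (whose contents are not echoed into the log) produced the Malloc0 bdev and the listener on 10.0.0.2:4420. Given MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 and the nqn.2016-06.io.spdk:cnode0 subsystem that bdevperf attaches to below, that batch is presumably equivalent to the following hand-written sketch; the exact RPC arguments are an illustration, not the script's literal contents:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 64 MiB malloc bdev with 512-byte blocks to back the namespace
"$rpc" bdev_malloc_create -b Malloc0 64 512
# Subsystem with an explicit host list, so that nvmf_subsystem_remove_host
# later in the test really cuts the initiator off
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420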
00:13:53.826 02:11:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:53.826 02:11:08 -- common/autotest_common.sh@10 -- # set +x 00:13:53.826 02:11:08 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:53.826 02:11:08 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:53.826 02:11:08 -- nvmf/common.sh@520 -- # config=() 00:13:53.826 02:11:08 -- nvmf/common.sh@520 -- # local subsystem config 00:13:53.826 02:11:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:53.826 02:11:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:53.826 { 00:13:53.826 "params": { 00:13:53.826 "name": "Nvme$subsystem", 00:13:53.826 "trtype": "$TEST_TRANSPORT", 00:13:53.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.826 "adrfam": "ipv4", 00:13:53.826 "trsvcid": "$NVMF_PORT", 00:13:53.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.826 "hdgst": ${hdgst:-false}, 00:13:53.826 "ddgst": ${ddgst:-false} 00:13:53.826 }, 00:13:53.826 "method": "bdev_nvme_attach_controller" 00:13:53.826 } 00:13:53.826 EOF 00:13:53.826 )") 00:13:53.826 02:11:08 -- nvmf/common.sh@542 -- # cat 00:13:53.826 02:11:08 -- nvmf/common.sh@544 -- # jq . 00:13:53.826 02:11:08 -- nvmf/common.sh@545 -- # IFS=, 00:13:53.826 02:11:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:53.826 "params": { 00:13:53.826 "name": "Nvme0", 00:13:53.826 "trtype": "tcp", 00:13:53.826 "traddr": "10.0.0.2", 00:13:53.826 "adrfam": "ipv4", 00:13:53.826 "trsvcid": "4420", 00:13:53.826 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:53.826 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:53.826 "hdgst": false, 00:13:53.826 "ddgst": false 00:13:53.826 }, 00:13:53.826 "method": "bdev_nvme_attach_controller" 00:13:53.826 }' 00:13:53.826 [2024-05-14 02:11:08.280372] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:53.826 [2024-05-14 02:11:08.280459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70429 ] 00:13:54.084 [2024-05-14 02:11:08.417973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.084 [2024-05-14 02:11:08.476715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.084 Running I/O for 10 seconds... 
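host_management.sh has now generated a one-controller bdevperf configuration with gen_nvmf_target_json (the printf output just above) and launched bdevperf against the target at 10.0.0.2:4420. A stand-alone sketch of that invocation follows, with the JSON written to a file instead of being passed over /dev/fd/63; the "subsystems"/"bdev" envelope around the printed fragment is assumed from the helper's usual output rather than copied from this log:

spdk=/home/vagrant/spdk_repo/spdk
cat > /tmp/bdevperf.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: read-back verification,
# -t 10: run for ten seconds, -r: private RPC socket for this bdevperf instance
"$spdk/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10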
00:13:55.018 02:11:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:55.018 02:11:09 -- common/autotest_common.sh@852 -- # return 0 00:13:55.018 02:11:09 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:55.018 02:11:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.018 02:11:09 -- common/autotest_common.sh@10 -- # set +x 00:13:55.018 02:11:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.018 02:11:09 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:55.018 02:11:09 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:55.018 02:11:09 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:55.018 02:11:09 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:55.018 02:11:09 -- target/host_management.sh@52 -- # local ret=1 00:13:55.018 02:11:09 -- target/host_management.sh@53 -- # local i 00:13:55.018 02:11:09 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:55.018 02:11:09 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:55.018 02:11:09 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:55.018 02:11:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.018 02:11:09 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:55.018 02:11:09 -- common/autotest_common.sh@10 -- # set +x 00:13:55.018 02:11:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.018 02:11:09 -- target/host_management.sh@55 -- # read_io_count=1978 00:13:55.018 02:11:09 -- target/host_management.sh@58 -- # '[' 1978 -ge 100 ']' 00:13:55.018 02:11:09 -- target/host_management.sh@59 -- # ret=0 00:13:55.018 02:11:09 -- target/host_management.sh@60 -- # break 00:13:55.018 02:11:09 -- target/host_management.sh@64 -- # return 0 00:13:55.018 02:11:09 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:55.018 02:11:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.018 02:11:09 -- common/autotest_common.sh@10 -- # set +x 00:13:55.018 [2024-05-14 02:11:09.315159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the 
state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.315986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef0b0 is same with the state(5) to be set 00:13:55.018 [2024-05-14 02:11:09.316199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.018 [2024-05-14 02:11:09.316230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.018 [2024-05-14 02:11:09.316274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:13:55.019 [2024-05-14 02:11:09.316529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 
[2024-05-14 02:11:09.316736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 
02:11:09.316961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.316982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.316991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.317014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.317024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.317035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.317044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.317056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.317065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.317076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.317086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.317097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.317106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.317117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.317127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.317139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.317148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.019 [2024-05-14 02:11:09.317159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.019 [2024-05-14 02:11:09.317170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317182] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.020 [2024-05-14 02:11:09.317652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.020 [2024-05-14 02:11:09.317663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f7d0 is same with the state(5) to be set 00:13:55.020 [2024-05-14 02:11:09.317712] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x130f7d0 was disconnected and freed. reset controller. 00:13:55.020 [2024-05-14 02:11:09.318892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:55.020 02:11:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.020 02:11:09 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:55.020 task offset: 12416 on job bdev=Nvme0n1 fails 00:13:55.020 00:13:55.020 Latency(us) 00:13:55.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.020 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:55.020 Job: Nvme0n1 ended in about 0.71 seconds with error 00:13:55.020 Verification LBA range: start 0x0 length 0x400 00:13:55.020 Nvme0n1 : 0.71 3006.04 187.88 90.45 0.00 20328.93 3321.48 27405.96 00:13:55.020 =================================================================================================================== 00:13:55.020 Total : 3006.04 187.88 90.45 0.00 20328.93 3321.48 27405.96 00:13:55.020 02:11:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.020 02:11:09 -- common/autotest_common.sh@10 -- # set +x 00:13:55.020 [2024-05-14 02:11:09.320999] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:55.020 [2024-05-14 02:11:09.321024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130f170 (9): Bad file descriptor 00:13:55.020 02:11:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.020 02:11:09 -- target/host_management.sh@87 -- # sleep 1 00:13:55.020 [2024-05-14 02:11:09.330016] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
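The burst of "ABORTED - SQ DELETION" completions above is the intended fault, not a real failure: host_management.sh revokes the initiator's access with nvmf_subsystem_remove_host while bdevperf has 64 commands in flight, the target tears down the host's queue pair, bdevperf starts a controller reset, and the host is then re-admitted with nvmf_subsystem_add_host so the reset can finish ("Resetting controller successful" above). Issued by hand, and assuming the target is on its default RPC socket as in this run, the same fault injection looks roughly like:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Revoke the host: every in-flight command completes with ABORTED - SQ DELETION
# and the initiator-side bdev_nvme layer begins a controller reset
"$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the host so the pending reset can reconnect
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0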
00:13:55.953 02:11:10 -- target/host_management.sh@91 -- # kill -9 70429 00:13:55.953 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (70429) - No such process 00:13:55.953 02:11:10 -- target/host_management.sh@91 -- # true 00:13:55.953 02:11:10 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:55.953 02:11:10 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:55.953 02:11:10 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:55.953 02:11:10 -- nvmf/common.sh@520 -- # config=() 00:13:55.953 02:11:10 -- nvmf/common.sh@520 -- # local subsystem config 00:13:55.953 02:11:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:55.953 02:11:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:55.953 { 00:13:55.953 "params": { 00:13:55.954 "name": "Nvme$subsystem", 00:13:55.954 "trtype": "$TEST_TRANSPORT", 00:13:55.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.954 "adrfam": "ipv4", 00:13:55.954 "trsvcid": "$NVMF_PORT", 00:13:55.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.954 "hdgst": ${hdgst:-false}, 00:13:55.954 "ddgst": ${ddgst:-false} 00:13:55.954 }, 00:13:55.954 "method": "bdev_nvme_attach_controller" 00:13:55.954 } 00:13:55.954 EOF 00:13:55.954 )") 00:13:55.954 02:11:10 -- nvmf/common.sh@542 -- # cat 00:13:55.954 02:11:10 -- nvmf/common.sh@544 -- # jq . 00:13:55.954 02:11:10 -- nvmf/common.sh@545 -- # IFS=, 00:13:55.954 02:11:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:55.954 "params": { 00:13:55.954 "name": "Nvme0", 00:13:55.954 "trtype": "tcp", 00:13:55.954 "traddr": "10.0.0.2", 00:13:55.954 "adrfam": "ipv4", 00:13:55.954 "trsvcid": "4420", 00:13:55.954 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:55.954 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:55.954 "hdgst": false, 00:13:55.954 "ddgst": false 00:13:55.954 }, 00:13:55.954 "method": "bdev_nvme_attach_controller" 00:13:55.954 }' 00:13:55.954 [2024-05-14 02:11:10.389667] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:55.954 [2024-05-14 02:11:10.389755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70484 ] 00:13:55.954 [2024-05-14 02:11:10.531350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.210 [2024-05-14 02:11:10.600397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.210 Running I/O for 1 seconds... 
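Before the fault was injected, the script confirmed that traffic was actually flowing by polling bdevperf over its RPC socket with the waitforio helper (the bdev_get_iostat / jq calls further up, which returned read_io_count=1978 on the first try); the one-second verify run started here repeats the same end-to-end check after the host was re-added. A rough equivalent of that polling loop is sketched below; the retry pacing is an assumption, since the log's first poll already passed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ret=1
for ((i = 10; i != 0; i--)); do
    # Ask the bdevperf instance (not the target) how many reads have completed on Nvme0n1
    reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done
[ "$ret" -eq 0 ] || echo "no reads observed on Nvme0n1" >&2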
00:13:57.585 00:13:57.585 Latency(us) 00:13:57.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.585 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:57.585 Verification LBA range: start 0x0 length 0x400 00:13:57.585 Nvme0n1 : 1.01 3252.26 203.27 0.00 0.00 19328.74 863.88 28597.53 00:13:57.585 =================================================================================================================== 00:13:57.585 Total : 3252.26 203.27 0.00 0.00 19328.74 863.88 28597.53 00:13:57.585 02:11:11 -- target/host_management.sh@101 -- # stoptarget 00:13:57.585 02:11:11 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:57.585 02:11:11 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:57.585 02:11:11 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:57.585 02:11:11 -- target/host_management.sh@40 -- # nvmftestfini 00:13:57.585 02:11:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:57.585 02:11:11 -- nvmf/common.sh@116 -- # sync 00:13:57.585 02:11:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:57.585 02:11:11 -- nvmf/common.sh@119 -- # set +e 00:13:57.585 02:11:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:57.585 02:11:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:57.585 rmmod nvme_tcp 00:13:57.585 rmmod nvme_fabrics 00:13:57.585 rmmod nvme_keyring 00:13:57.585 02:11:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:57.585 02:11:12 -- nvmf/common.sh@123 -- # set -e 00:13:57.585 02:11:12 -- nvmf/common.sh@124 -- # return 0 00:13:57.585 02:11:12 -- nvmf/common.sh@477 -- # '[' -n 70357 ']' 00:13:57.585 02:11:12 -- nvmf/common.sh@478 -- # killprocess 70357 00:13:57.585 02:11:12 -- common/autotest_common.sh@926 -- # '[' -z 70357 ']' 00:13:57.585 02:11:12 -- common/autotest_common.sh@930 -- # kill -0 70357 00:13:57.586 02:11:12 -- common/autotest_common.sh@931 -- # uname 00:13:57.586 02:11:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:57.586 02:11:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70357 00:13:57.586 02:11:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:57.586 02:11:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:57.586 killing process with pid 70357 00:13:57.586 02:11:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70357' 00:13:57.586 02:11:12 -- common/autotest_common.sh@945 -- # kill 70357 00:13:57.586 02:11:12 -- common/autotest_common.sh@950 -- # wait 70357 00:13:57.844 [2024-05-14 02:11:12.236742] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:57.844 02:11:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:57.844 02:11:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:57.844 02:11:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:57.844 02:11:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.844 02:11:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:57.844 02:11:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.844 02:11:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.844 02:11:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.844 02:11:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:57.844 ************************************ 00:13:57.844 
END TEST nvmf_host_management 00:13:57.844 ************************************ 00:13:57.844 00:13:57.844 real 0m5.264s 00:13:57.844 user 0m22.090s 00:13:57.844 sys 0m1.134s 00:13:57.844 02:11:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.844 02:11:12 -- common/autotest_common.sh@10 -- # set +x 00:13:57.844 02:11:12 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:57.844 ************************************ 00:13:57.844 END TEST nvmf_host_management 00:13:57.844 ************************************ 00:13:57.844 00:13:57.844 real 0m5.774s 00:13:57.844 user 0m22.196s 00:13:57.844 sys 0m1.392s 00:13:57.844 02:11:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.844 02:11:12 -- common/autotest_common.sh@10 -- # set +x 00:13:57.844 02:11:12 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:57.844 02:11:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:57.844 02:11:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:57.844 02:11:12 -- common/autotest_common.sh@10 -- # set +x 00:13:57.844 ************************************ 00:13:57.844 START TEST nvmf_lvol 00:13:57.844 ************************************ 00:13:57.844 02:11:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:58.104 * Looking for test storage... 00:13:58.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:58.104 02:11:12 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:58.104 02:11:12 -- nvmf/common.sh@7 -- # uname -s 00:13:58.104 02:11:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.104 02:11:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.104 02:11:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.104 02:11:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.104 02:11:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.104 02:11:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.104 02:11:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.104 02:11:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.104 02:11:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.104 02:11:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.104 02:11:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:13:58.104 02:11:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:13:58.104 02:11:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.104 02:11:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.104 02:11:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:58.104 02:11:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:58.104 02:11:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.104 02:11:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.104 02:11:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.104 02:11:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.104 02:11:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.104 02:11:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.104 02:11:12 -- paths/export.sh@5 -- # export PATH 00:13:58.104 02:11:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.104 02:11:12 -- nvmf/common.sh@46 -- # : 0 00:13:58.104 02:11:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:58.104 02:11:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:58.104 02:11:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:58.104 02:11:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.104 02:11:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.104 02:11:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:58.104 02:11:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:58.104 02:11:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:58.104 02:11:12 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.104 02:11:12 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.104 02:11:12 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:58.104 02:11:12 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:58.104 02:11:12 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:58.104 02:11:12 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:58.104 02:11:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:58.104 02:11:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
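nvmftestinit for the lvol test now rebuilds the same veth/namespace topology that host_management used earlier: the initiator stays in the root namespace on 10.0.0.1, the two target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, and the bridge-side peers are joined by nvmf_br (the "Cannot find device" and "Cannot open network namespace" messages below are just the helper tearing down leftovers before rebuilding). Condensed into a plain script, the bring-up that follows amounts to:

ip netns add nvmf_tgt_ns_spdk
# One veth pair per endpoint; the *_br ends stay in the root namespace for the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Tie the root-namespace ends together with a bridge
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Let NVMe/TCP (port 4420) in, allow intra-bridge forwarding, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1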
00:13:58.104 02:11:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:58.104 02:11:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:58.104 02:11:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:58.104 02:11:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.104 02:11:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.104 02:11:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.104 02:11:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:58.104 02:11:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:58.104 02:11:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:58.104 02:11:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:58.104 02:11:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:58.104 02:11:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:58.104 02:11:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.104 02:11:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.104 02:11:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:58.104 02:11:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:58.104 02:11:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:58.104 02:11:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:58.104 02:11:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:58.104 02:11:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.104 02:11:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:58.104 02:11:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:58.104 02:11:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:58.104 02:11:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:58.104 02:11:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:58.104 02:11:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:58.104 Cannot find device "nvmf_tgt_br" 00:13:58.104 02:11:12 -- nvmf/common.sh@154 -- # true 00:13:58.104 02:11:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:58.104 Cannot find device "nvmf_tgt_br2" 00:13:58.104 02:11:12 -- nvmf/common.sh@155 -- # true 00:13:58.104 02:11:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:58.104 02:11:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:58.104 Cannot find device "nvmf_tgt_br" 00:13:58.104 02:11:12 -- nvmf/common.sh@157 -- # true 00:13:58.104 02:11:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:58.104 Cannot find device "nvmf_tgt_br2" 00:13:58.104 02:11:12 -- nvmf/common.sh@158 -- # true 00:13:58.104 02:11:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:58.104 02:11:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:58.104 02:11:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:58.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:58.104 02:11:12 -- nvmf/common.sh@161 -- # true 00:13:58.104 02:11:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:58.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:58.104 02:11:12 -- nvmf/common.sh@162 -- # true 00:13:58.104 02:11:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:58.104 02:11:12 -- nvmf/common.sh@168 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:13:58.104 02:11:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:58.104 02:11:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:58.104 02:11:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:58.104 02:11:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:58.364 02:11:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:58.364 02:11:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:58.364 02:11:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:58.364 02:11:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:58.364 02:11:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:58.364 02:11:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:58.364 02:11:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:58.364 02:11:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:58.364 02:11:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:58.364 02:11:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:58.364 02:11:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:58.364 02:11:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:58.364 02:11:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:58.364 02:11:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:58.364 02:11:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:58.364 02:11:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:58.364 02:11:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:58.364 02:11:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:58.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:13:58.365 00:13:58.365 --- 10.0.0.2 ping statistics --- 00:13:58.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.365 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:58.365 02:11:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:58.365 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:58.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:13:58.365 00:13:58.365 --- 10.0.0.3 ping statistics --- 00:13:58.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.365 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:58.365 02:11:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:58.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:58.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:58.365 00:13:58.365 --- 10.0.0.1 ping statistics --- 00:13:58.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.365 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:58.365 02:11:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.365 02:11:12 -- nvmf/common.sh@421 -- # return 0 00:13:58.365 02:11:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:58.365 02:11:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.365 02:11:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:58.365 02:11:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:58.365 02:11:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.365 02:11:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:58.365 02:11:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:58.365 02:11:12 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:58.365 02:11:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:58.365 02:11:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:58.365 02:11:12 -- common/autotest_common.sh@10 -- # set +x 00:13:58.365 02:11:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:58.365 02:11:12 -- nvmf/common.sh@469 -- # nvmfpid=70710 00:13:58.365 02:11:12 -- nvmf/common.sh@470 -- # waitforlisten 70710 00:13:58.365 02:11:12 -- common/autotest_common.sh@819 -- # '[' -z 70710 ']' 00:13:58.365 02:11:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.365 02:11:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:58.365 02:11:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.365 02:11:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:58.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.365 02:11:12 -- common/autotest_common.sh@10 -- # set +x 00:13:58.365 [2024-05-14 02:11:12.931545] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:58.365 [2024-05-14 02:11:12.931625] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.629 [2024-05-14 02:11:13.068435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:58.629 [2024-05-14 02:11:13.136504] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:58.629 [2024-05-14 02:11:13.136675] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.629 [2024-05-14 02:11:13.136691] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.629 [2024-05-14 02:11:13.136702] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
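For reference, the network plumbing that nvmf_veth_init performed in the trace above boils down to the commands below. This is a condensed sketch assembled only from the ip/iptables calls visible in the log, not additional test output; the interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.0/24 addresses are the ones the test uses, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) follows the same pattern and is omitted for brevity.

    # namespace for the NVMe-oF target, with veth pairs bridged back to the initiator side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # one bridge ties the host-side peers of both pairs together
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator-to-target sanity check, as in the log

The target application itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace that follows.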
00:13:58.629 [2024-05-14 02:11:13.136832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.629 [2024-05-14 02:11:13.137245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.629 [2024-05-14 02:11:13.137258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.578 02:11:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:59.578 02:11:13 -- common/autotest_common.sh@852 -- # return 0 00:13:59.578 02:11:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:59.578 02:11:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:59.578 02:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:59.578 02:11:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.578 02:11:14 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:59.837 [2024-05-14 02:11:14.268549] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.837 02:11:14 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:00.095 02:11:14 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:00.095 02:11:14 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:00.353 02:11:14 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:00.353 02:11:14 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:00.611 02:11:15 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:00.869 02:11:15 -- target/nvmf_lvol.sh@29 -- # lvs=40a4ba99-ee24-446f-9bee-8f9184a9efa2 00:14:00.869 02:11:15 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 40a4ba99-ee24-446f-9bee-8f9184a9efa2 lvol 20 00:14:01.435 02:11:15 -- target/nvmf_lvol.sh@32 -- # lvol=92db7a7a-5b98-4832-8683-51d8514c1502 00:14:01.435 02:11:15 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:01.435 02:11:16 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 92db7a7a-5b98-4832-8683-51d8514c1502 00:14:02.000 02:11:16 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:02.000 [2024-05-14 02:11:16.551345] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.000 02:11:16 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:02.258 02:11:16 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:02.258 02:11:16 -- target/nvmf_lvol.sh@42 -- # perf_pid=70859 00:14:02.258 02:11:16 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:03.635 02:11:17 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 92db7a7a-5b98-4832-8683-51d8514c1502 MY_SNAPSHOT 00:14:03.635 02:11:18 -- target/nvmf_lvol.sh@47 -- # snapshot=4e4e7a17-2cad-422e-b0e1-23736f7e2afd 00:14:03.635 02:11:18 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 92db7a7a-5b98-4832-8683-51d8514c1502 30 00:14:04.220 02:11:18 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4e4e7a17-2cad-422e-b0e1-23736f7e2afd MY_CLONE 00:14:04.220 02:11:18 -- target/nvmf_lvol.sh@49 -- # clone=c4f2f6fe-5e48-40da-9f24-5dd94a27c6ec 00:14:04.220 02:11:18 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c4f2f6fe-5e48-40da-9f24-5dd94a27c6ec 00:14:05.187 02:11:19 -- target/nvmf_lvol.sh@53 -- # wait 70859 00:14:13.304 Initializing NVMe Controllers 00:14:13.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:13.304 Controller IO queue size 128, less than required. 00:14:13.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:13.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:13.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:13.304 Initialization complete. Launching workers. 00:14:13.304 ======================================================== 00:14:13.304 Latency(us) 00:14:13.304 Device Information : IOPS MiB/s Average min max 00:14:13.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10526.00 41.12 12163.57 1995.67 61711.16 00:14:13.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10609.10 41.44 12070.02 501.90 83512.56 00:14:13.304 ======================================================== 00:14:13.304 Total : 21135.10 82.56 12116.61 501.90 83512.56 00:14:13.304 00:14:13.304 02:11:27 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:13.304 02:11:27 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 92db7a7a-5b98-4832-8683-51d8514c1502 00:14:13.304 02:11:27 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40a4ba99-ee24-446f-9bee-8f9184a9efa2 00:14:13.562 02:11:27 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:13.562 02:11:27 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:13.562 02:11:27 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:13.562 02:11:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:13.562 02:11:27 -- nvmf/common.sh@116 -- # sync 00:14:13.562 02:11:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:13.562 02:11:27 -- nvmf/common.sh@119 -- # set +e 00:14:13.562 02:11:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:13.562 02:11:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:13.562 rmmod nvme_tcp 00:14:13.562 rmmod nvme_fabrics 00:14:13.562 rmmod nvme_keyring 00:14:13.562 02:11:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:13.562 02:11:28 -- nvmf/common.sh@123 -- # set -e 00:14:13.562 02:11:28 -- nvmf/common.sh@124 -- # return 0 00:14:13.562 02:11:28 -- nvmf/common.sh@477 -- # '[' -n 70710 ']' 00:14:13.562 02:11:28 -- nvmf/common.sh@478 -- # killprocess 70710 00:14:13.562 02:11:28 -- common/autotest_common.sh@926 -- # '[' -z 70710 ']' 00:14:13.562 02:11:28 -- common/autotest_common.sh@930 -- # kill -0 70710 00:14:13.562 02:11:28 -- common/autotest_common.sh@931 -- # uname 00:14:13.562 02:11:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:13.562 02:11:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
70710 00:14:13.562 02:11:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:13.562 killing process with pid 70710 00:14:13.562 02:11:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:13.562 02:11:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70710' 00:14:13.562 02:11:28 -- common/autotest_common.sh@945 -- # kill 70710 00:14:13.562 02:11:28 -- common/autotest_common.sh@950 -- # wait 70710 00:14:13.821 02:11:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:13.821 02:11:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:13.821 02:11:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:13.821 02:11:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.821 02:11:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:13.821 02:11:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.821 02:11:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.821 02:11:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.821 02:11:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:13.821 00:14:13.821 real 0m15.901s 00:14:13.821 user 1m6.636s 00:14:13.821 sys 0m3.760s 00:14:13.821 02:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.821 02:11:28 -- common/autotest_common.sh@10 -- # set +x 00:14:13.821 ************************************ 00:14:13.821 END TEST nvmf_lvol 00:14:13.821 ************************************ 00:14:13.821 02:11:28 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:13.821 02:11:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:13.821 02:11:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:13.821 02:11:28 -- common/autotest_common.sh@10 -- # set +x 00:14:13.821 ************************************ 00:14:13.821 START TEST nvmf_lvs_grow 00:14:13.821 ************************************ 00:14:13.821 02:11:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:14.079 * Looking for test storage... 
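Condensed for readability, the nvmf_lvol run that just completed above exercises roughly the RPC sequence below. This is a sketch built only from the rpc.py (scripts/rpc.py in the repo) and spdk_nvme_perf invocations that appear in the trace; $LVS, $LVOL, $SNAP and $CLONE stand for the UUIDs the real run captures from each call's output.

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                     # Malloc0
    rpc.py bdev_malloc_create 64 512                     # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    LVS=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)     # lvolstore on top of the raid0 bdev
    LVOL=$(rpc.py bdev_lvol_create -u "$LVS" lvol 20)    # 20 MiB logical volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # keep I/O running from the initiator while the lvol is reshaped underneath it
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    SNAP=$(rpc.py bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$LVOL" 30
    CLONE=$(rpc.py bdev_lvol_clone "$SNAP" MY_CLONE)
    rpc.py bdev_lvol_inflate "$CLONE"
    wait                                                 # let the 10 s perf run finish
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete "$LVOL"
    rpc.py bdev_lvol_delete_lvstore -u "$LVS"

The point of the case is that snapshot, resize, clone and inflate all happen while spdk_nvme_perf keeps 128 queued random writes outstanding against the exported namespace, which is the load the IOPS/latency table above was measuring.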
00:14:14.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:14.079 02:11:28 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:14.079 02:11:28 -- nvmf/common.sh@7 -- # uname -s 00:14:14.079 02:11:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.079 02:11:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.079 02:11:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.079 02:11:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.079 02:11:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.079 02:11:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.079 02:11:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.079 02:11:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.079 02:11:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.079 02:11:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.079 02:11:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:14:14.079 02:11:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:14:14.079 02:11:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.079 02:11:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.079 02:11:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:14.079 02:11:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:14.079 02:11:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.079 02:11:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.079 02:11:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.080 02:11:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.080 02:11:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.080 02:11:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.080 02:11:28 -- 
paths/export.sh@5 -- # export PATH 00:14:14.080 02:11:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.080 02:11:28 -- nvmf/common.sh@46 -- # : 0 00:14:14.080 02:11:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:14.080 02:11:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:14.080 02:11:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:14.080 02:11:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.080 02:11:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.080 02:11:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:14.080 02:11:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:14.080 02:11:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:14.080 02:11:28 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.080 02:11:28 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.080 02:11:28 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:14.080 02:11:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:14.080 02:11:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.080 02:11:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:14.080 02:11:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:14.080 02:11:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:14.080 02:11:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.080 02:11:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.080 02:11:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.080 02:11:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:14.080 02:11:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:14.080 02:11:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:14.080 02:11:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:14.080 02:11:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:14.080 02:11:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:14.080 02:11:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.080 02:11:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.080 02:11:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:14.080 02:11:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:14.080 02:11:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:14.080 02:11:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:14.080 02:11:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:14.080 02:11:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.080 02:11:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:14.080 02:11:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:14.080 02:11:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:14.080 02:11:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:14.080 02:11:28 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:14.080 02:11:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:14.080 Cannot find device "nvmf_tgt_br" 00:14:14.080 02:11:28 -- nvmf/common.sh@154 -- # true 00:14:14.080 02:11:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:14.080 Cannot find device "nvmf_tgt_br2" 00:14:14.080 02:11:28 -- nvmf/common.sh@155 -- # true 00:14:14.080 02:11:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:14.080 02:11:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:14.080 Cannot find device "nvmf_tgt_br" 00:14:14.080 02:11:28 -- nvmf/common.sh@157 -- # true 00:14:14.080 02:11:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:14.080 Cannot find device "nvmf_tgt_br2" 00:14:14.080 02:11:28 -- nvmf/common.sh@158 -- # true 00:14:14.080 02:11:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:14.080 02:11:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:14.080 02:11:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:14.080 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.080 02:11:28 -- nvmf/common.sh@161 -- # true 00:14:14.080 02:11:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:14.080 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.080 02:11:28 -- nvmf/common.sh@162 -- # true 00:14:14.080 02:11:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:14.080 02:11:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:14.080 02:11:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:14.080 02:11:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:14.080 02:11:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:14.080 02:11:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:14.080 02:11:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:14.080 02:11:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:14.080 02:11:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:14.080 02:11:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:14.080 02:11:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:14.080 02:11:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:14.339 02:11:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:14.339 02:11:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:14.339 02:11:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:14.339 02:11:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:14.339 02:11:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:14.339 02:11:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:14.339 02:11:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:14.339 02:11:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:14.339 02:11:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:14.339 02:11:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:14.339 02:11:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:14.339 02:11:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:14.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:14:14.339 00:14:14.339 --- 10.0.0.2 ping statistics --- 00:14:14.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.339 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:14.339 02:11:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:14.339 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:14.339 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:14.339 00:14:14.339 --- 10.0.0.3 ping statistics --- 00:14:14.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.339 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:14.339 02:11:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:14.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:14.339 00:14:14.339 --- 10.0.0.1 ping statistics --- 00:14:14.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.339 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:14.339 02:11:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.339 02:11:28 -- nvmf/common.sh@421 -- # return 0 00:14:14.339 02:11:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:14.339 02:11:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.339 02:11:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:14.339 02:11:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:14.339 02:11:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.339 02:11:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:14.339 02:11:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:14.339 02:11:28 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:14.339 02:11:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:14.339 02:11:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:14.339 02:11:28 -- common/autotest_common.sh@10 -- # set +x 00:14:14.339 02:11:28 -- nvmf/common.sh@469 -- # nvmfpid=71218 00:14:14.339 02:11:28 -- nvmf/common.sh@470 -- # waitforlisten 71218 00:14:14.339 02:11:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:14.339 02:11:28 -- common/autotest_common.sh@819 -- # '[' -z 71218 ']' 00:14:14.339 02:11:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.339 02:11:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:14.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.339 02:11:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.339 02:11:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:14.339 02:11:28 -- common/autotest_common.sh@10 -- # set +x 00:14:14.339 [2024-05-14 02:11:28.866904] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
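The lvs_grow_clean case that this second target instance is being started for (see the run below) checks that an lvolstore can grow when its backing AIO bdev grows. Stripped of the test harness, the flow is roughly the sketch below; it uses only rpc.py calls that show up later in the trace, the backing-file path is the one from the log, and the 49/99 cluster counts follow from the 200 MiB to 400 MiB file sizes with a 4 MiB cluster. The NVMe/TCP export and the bdevperf load that run concurrently in the real test are omitted here.

    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$AIO"                          # backing file for the AIO bdev
    rpc.py bdev_aio_create "$AIO" aio_bdev 4096
    LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49
    rpc.py bdev_lvol_create -u "$LVS" lvol 150       # 150 MiB volume
    truncate -s 400M "$AIO"                          # grow the file underneath the bdev
    rpc.py bdev_aio_rescan aio_bdev                  # bdev picks up the new block count
    rpc.py bdev_lvol_grow_lvstore -u "$LVS"          # lvolstore claims the new clusters
    rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 99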
00:14:14.339 [2024-05-14 02:11:28.866984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.598 [2024-05-14 02:11:29.001388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.598 [2024-05-14 02:11:29.067717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:14.598 [2024-05-14 02:11:29.067905] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.598 [2024-05-14 02:11:29.067923] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.598 [2024-05-14 02:11:29.067932] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.598 [2024-05-14 02:11:29.067964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.533 02:11:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:15.533 02:11:29 -- common/autotest_common.sh@852 -- # return 0 00:14:15.533 02:11:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:15.533 02:11:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:15.533 02:11:29 -- common/autotest_common.sh@10 -- # set +x 00:14:15.533 02:11:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.533 02:11:29 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:15.792 [2024-05-14 02:11:30.171552] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.792 02:11:30 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:15.792 02:11:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:15.792 02:11:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:15.792 02:11:30 -- common/autotest_common.sh@10 -- # set +x 00:14:15.792 ************************************ 00:14:15.792 START TEST lvs_grow_clean 00:14:15.792 ************************************ 00:14:15.792 02:11:30 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:15.792 02:11:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:15.792 02:11:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:15.792 02:11:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:15.792 02:11:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:15.792 02:11:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:15.792 02:11:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:15.792 02:11:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:15.792 02:11:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:15.792 02:11:30 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:16.051 02:11:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:16.051 02:11:30 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:16.309 02:11:30 -- target/nvmf_lvs_grow.sh@28 
-- # lvs=618bbd99-a084-4c98-bda9-9475ca49139e 00:14:16.309 02:11:30 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:16.309 02:11:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:16.568 02:11:31 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:16.568 02:11:31 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:16.568 02:11:31 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 618bbd99-a084-4c98-bda9-9475ca49139e lvol 150 00:14:16.826 02:11:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2c6067e4-9cc3-4568-a202-d0b16e3e7569 00:14:16.826 02:11:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:16.826 02:11:31 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:17.083 [2024-05-14 02:11:31.502518] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:17.083 [2024-05-14 02:11:31.502593] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:17.083 true 00:14:17.083 02:11:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:17.083 02:11:31 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:17.342 02:11:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:17.342 02:11:31 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:17.600 02:11:32 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2c6067e4-9cc3-4568-a202-d0b16e3e7569 00:14:17.858 02:11:32 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:18.116 [2024-05-14 02:11:32.479111] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.116 02:11:32 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:18.375 02:11:32 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71385 00:14:18.375 02:11:32 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:18.375 02:11:32 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71385 /var/tmp/bdevperf.sock 00:14:18.375 02:11:32 -- common/autotest_common.sh@819 -- # '[' -z 71385 ']' 00:14:18.375 02:11:32 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:18.375 02:11:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.375 02:11:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:18.375 02:11:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:18.375 02:11:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:18.375 02:11:32 -- common/autotest_common.sh@10 -- # set +x 00:14:18.375 [2024-05-14 02:11:32.812470] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:18.375 [2024-05-14 02:11:32.812566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71385 ] 00:14:18.375 [2024-05-14 02:11:32.953104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.634 [2024-05-14 02:11:33.019076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.199 02:11:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:19.199 02:11:33 -- common/autotest_common.sh@852 -- # return 0 00:14:19.199 02:11:33 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:19.763 Nvme0n1 00:14:19.763 02:11:34 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:19.763 [ 00:14:19.763 { 00:14:19.763 "aliases": [ 00:14:19.763 "2c6067e4-9cc3-4568-a202-d0b16e3e7569" 00:14:19.763 ], 00:14:19.763 "assigned_rate_limits": { 00:14:19.763 "r_mbytes_per_sec": 0, 00:14:19.763 "rw_ios_per_sec": 0, 00:14:19.763 "rw_mbytes_per_sec": 0, 00:14:19.763 "w_mbytes_per_sec": 0 00:14:19.763 }, 00:14:19.763 "block_size": 4096, 00:14:19.763 "claimed": false, 00:14:19.763 "driver_specific": { 00:14:19.763 "mp_policy": "active_passive", 00:14:19.763 "nvme": [ 00:14:19.763 { 00:14:19.763 "ctrlr_data": { 00:14:19.763 "ana_reporting": false, 00:14:19.763 "cntlid": 1, 00:14:19.763 "firmware_revision": "24.01.1", 00:14:19.763 "model_number": "SPDK bdev Controller", 00:14:19.763 "multi_ctrlr": true, 00:14:19.763 "oacs": { 00:14:19.763 "firmware": 0, 00:14:19.763 "format": 0, 00:14:19.763 "ns_manage": 0, 00:14:19.763 "security": 0 00:14:19.763 }, 00:14:19.764 "serial_number": "SPDK0", 00:14:19.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:19.764 "vendor_id": "0x8086" 00:14:19.764 }, 00:14:19.764 "ns_data": { 00:14:19.764 "can_share": true, 00:14:19.764 "id": 1 00:14:19.764 }, 00:14:19.764 "trid": { 00:14:19.764 "adrfam": "IPv4", 00:14:19.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:19.764 "traddr": "10.0.0.2", 00:14:19.764 "trsvcid": "4420", 00:14:19.764 "trtype": "TCP" 00:14:19.764 }, 00:14:19.764 "vs": { 00:14:19.764 "nvme_version": "1.3" 00:14:19.764 } 00:14:19.764 } 00:14:19.764 ] 00:14:19.764 }, 00:14:19.764 "name": "Nvme0n1", 00:14:19.764 "num_blocks": 38912, 00:14:19.764 "product_name": "NVMe disk", 00:14:19.764 "supported_io_types": { 00:14:19.764 "abort": true, 00:14:19.764 "compare": true, 00:14:19.764 "compare_and_write": true, 00:14:19.764 "flush": true, 00:14:19.764 "nvme_admin": true, 00:14:19.764 "nvme_io": true, 00:14:19.764 "read": true, 00:14:19.764 "reset": true, 00:14:19.764 "unmap": true, 00:14:19.764 "write": true, 00:14:19.764 "write_zeroes": true 00:14:19.764 }, 00:14:19.764 "uuid": "2c6067e4-9cc3-4568-a202-d0b16e3e7569", 00:14:19.764 "zoned": false 00:14:19.764 } 00:14:19.764 ] 00:14:19.764 02:11:34 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71427 00:14:19.764 02:11:34 -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:19.764 02:11:34 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:20.022 Running I/O for 10 seconds... 00:14:20.958 Latency(us) 00:14:20.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.958 Nvme0n1 : 1.00 8316.00 32.48 0.00 0.00 0.00 0.00 0.00 00:14:20.958 =================================================================================================================== 00:14:20.958 Total : 8316.00 32.48 0.00 0.00 0.00 0.00 0.00 00:14:20.958 00:14:21.891 02:11:36 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:21.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.891 Nvme0n1 : 2.00 8270.00 32.30 0.00 0.00 0.00 0.00 0.00 00:14:21.891 =================================================================================================================== 00:14:21.891 Total : 8270.00 32.30 0.00 0.00 0.00 0.00 0.00 00:14:21.891 00:14:22.150 true 00:14:22.150 02:11:36 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:22.150 02:11:36 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:22.408 02:11:36 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:22.408 02:11:36 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:22.408 02:11:36 -- target/nvmf_lvs_grow.sh@65 -- # wait 71427 00:14:22.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.976 Nvme0n1 : 3.00 8347.33 32.61 0.00 0.00 0.00 0.00 0.00 00:14:22.976 =================================================================================================================== 00:14:22.976 Total : 8347.33 32.61 0.00 0.00 0.00 0.00 0.00 00:14:22.976 00:14:23.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.936 Nvme0n1 : 4.00 8379.75 32.73 0.00 0.00 0.00 0.00 0.00 00:14:23.936 =================================================================================================================== 00:14:23.936 Total : 8379.75 32.73 0.00 0.00 0.00 0.00 0.00 00:14:23.936 00:14:24.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.870 Nvme0n1 : 5.00 8356.60 32.64 0.00 0.00 0.00 0.00 0.00 00:14:24.870 =================================================================================================================== 00:14:24.870 Total : 8356.60 32.64 0.00 0.00 0.00 0.00 0.00 00:14:24.870 00:14:26.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.244 Nvme0n1 : 6.00 8340.00 32.58 0.00 0.00 0.00 0.00 0.00 00:14:26.244 =================================================================================================================== 00:14:26.244 Total : 8340.00 32.58 0.00 0.00 0.00 0.00 0.00 00:14:26.244 00:14:27.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.179 Nvme0n1 : 7.00 8327.71 32.53 0.00 0.00 0.00 0.00 0.00 00:14:27.179 =================================================================================================================== 00:14:27.179 Total : 8327.71 32.53 0.00 0.00 0.00 0.00 0.00 00:14:27.179 00:14:28.112 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:14:28.112 Nvme0n1 : 8.00 8294.00 32.40 0.00 0.00 0.00 0.00 0.00 00:14:28.112 =================================================================================================================== 00:14:28.112 Total : 8294.00 32.40 0.00 0.00 0.00 0.00 0.00 00:14:28.112 00:14:29.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.047 Nvme0n1 : 9.00 8299.22 32.42 0.00 0.00 0.00 0.00 0.00 00:14:29.047 =================================================================================================================== 00:14:29.047 Total : 8299.22 32.42 0.00 0.00 0.00 0.00 0.00 00:14:29.047 00:14:29.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.981 Nvme0n1 : 10.00 8302.60 32.43 0.00 0.00 0.00 0.00 0.00 00:14:29.981 =================================================================================================================== 00:14:29.981 Total : 8302.60 32.43 0.00 0.00 0.00 0.00 0.00 00:14:29.981 00:14:29.981 00:14:29.981 Latency(us) 00:14:29.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.981 Nvme0n1 : 10.01 8307.67 32.45 0.00 0.00 15398.41 7268.54 31457.28 00:14:29.981 =================================================================================================================== 00:14:29.981 Total : 8307.67 32.45 0.00 0.00 15398.41 7268.54 31457.28 00:14:29.981 0 00:14:29.981 02:11:44 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71385 00:14:29.981 02:11:44 -- common/autotest_common.sh@926 -- # '[' -z 71385 ']' 00:14:29.981 02:11:44 -- common/autotest_common.sh@930 -- # kill -0 71385 00:14:29.981 02:11:44 -- common/autotest_common.sh@931 -- # uname 00:14:29.981 02:11:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:29.981 02:11:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71385 00:14:29.981 killing process with pid 71385 00:14:29.981 Received shutdown signal, test time was about 10.000000 seconds 00:14:29.981 00:14:29.981 Latency(us) 00:14:29.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.981 =================================================================================================================== 00:14:29.981 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.981 02:11:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:29.981 02:11:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:29.981 02:11:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71385' 00:14:29.981 02:11:44 -- common/autotest_common.sh@945 -- # kill 71385 00:14:29.982 02:11:44 -- common/autotest_common.sh@950 -- # wait 71385 00:14:30.240 02:11:44 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:30.498 02:11:44 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:30.498 02:11:44 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:30.774 02:11:45 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:30.774 02:11:45 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:30.774 02:11:45 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:31.037 [2024-05-14 02:11:45.519661] 
vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:31.037 02:11:45 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:31.037 02:11:45 -- common/autotest_common.sh@640 -- # local es=0 00:14:31.037 02:11:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:31.037 02:11:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.037 02:11:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:31.037 02:11:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.037 02:11:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:31.037 02:11:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.037 02:11:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:31.037 02:11:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.037 02:11:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:31.037 02:11:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:31.295 2024/05/14 02:11:45 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:618bbd99-a084-4c98-bda9-9475ca49139e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:31.295 request: 00:14:31.295 { 00:14:31.295 "method": "bdev_lvol_get_lvstores", 00:14:31.295 "params": { 00:14:31.295 "uuid": "618bbd99-a084-4c98-bda9-9475ca49139e" 00:14:31.295 } 00:14:31.295 } 00:14:31.295 Got JSON-RPC error response 00:14:31.295 GoRPCClient: error on JSON-RPC call 00:14:31.295 02:11:45 -- common/autotest_common.sh@643 -- # es=1 00:14:31.295 02:11:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:31.295 02:11:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:31.295 02:11:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:31.295 02:11:45 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:31.554 aio_bdev 00:14:31.554 02:11:46 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 2c6067e4-9cc3-4568-a202-d0b16e3e7569 00:14:31.554 02:11:46 -- common/autotest_common.sh@887 -- # local bdev_name=2c6067e4-9cc3-4568-a202-d0b16e3e7569 00:14:31.554 02:11:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:31.554 02:11:46 -- common/autotest_common.sh@889 -- # local i 00:14:31.554 02:11:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:31.554 02:11:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:31.554 02:11:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:32.121 02:11:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2c6067e4-9cc3-4568-a202-d0b16e3e7569 -t 2000 00:14:32.121 [ 00:14:32.121 { 00:14:32.121 "aliases": [ 00:14:32.121 "lvs/lvol" 00:14:32.121 ], 00:14:32.121 "assigned_rate_limits": { 00:14:32.121 "r_mbytes_per_sec": 0, 00:14:32.121 "rw_ios_per_sec": 0, 
00:14:32.121 "rw_mbytes_per_sec": 0, 00:14:32.121 "w_mbytes_per_sec": 0 00:14:32.121 }, 00:14:32.121 "block_size": 4096, 00:14:32.121 "claimed": false, 00:14:32.121 "driver_specific": { 00:14:32.121 "lvol": { 00:14:32.121 "base_bdev": "aio_bdev", 00:14:32.121 "clone": false, 00:14:32.121 "esnap_clone": false, 00:14:32.121 "lvol_store_uuid": "618bbd99-a084-4c98-bda9-9475ca49139e", 00:14:32.121 "snapshot": false, 00:14:32.121 "thin_provision": false 00:14:32.121 } 00:14:32.121 }, 00:14:32.121 "name": "2c6067e4-9cc3-4568-a202-d0b16e3e7569", 00:14:32.121 "num_blocks": 38912, 00:14:32.121 "product_name": "Logical Volume", 00:14:32.121 "supported_io_types": { 00:14:32.121 "abort": false, 00:14:32.121 "compare": false, 00:14:32.121 "compare_and_write": false, 00:14:32.121 "flush": false, 00:14:32.121 "nvme_admin": false, 00:14:32.121 "nvme_io": false, 00:14:32.121 "read": true, 00:14:32.121 "reset": true, 00:14:32.121 "unmap": true, 00:14:32.121 "write": true, 00:14:32.121 "write_zeroes": true 00:14:32.121 }, 00:14:32.121 "uuid": "2c6067e4-9cc3-4568-a202-d0b16e3e7569", 00:14:32.121 "zoned": false 00:14:32.121 } 00:14:32.121 ] 00:14:32.121 02:11:46 -- common/autotest_common.sh@895 -- # return 0 00:14:32.121 02:11:46 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:32.121 02:11:46 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:32.379 02:11:46 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:32.379 02:11:46 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:32.379 02:11:46 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:32.638 02:11:47 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:32.638 02:11:47 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2c6067e4-9cc3-4568-a202-d0b16e3e7569 00:14:33.206 02:11:47 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 618bbd99-a084-4c98-bda9-9475ca49139e 00:14:33.206 02:11:47 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:33.834 02:11:48 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:34.093 ************************************ 00:14:34.093 END TEST lvs_grow_clean 00:14:34.093 ************************************ 00:14:34.093 00:14:34.093 real 0m18.254s 00:14:34.093 user 0m17.700s 00:14:34.093 sys 0m2.070s 00:14:34.093 02:11:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.093 02:11:48 -- common/autotest_common.sh@10 -- # set +x 00:14:34.093 02:11:48 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:34.093 02:11:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:34.093 02:11:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:34.093 02:11:48 -- common/autotest_common.sh@10 -- # set +x 00:14:34.093 ************************************ 00:14:34.093 START TEST lvs_grow_dirty 00:14:34.093 ************************************ 00:14:34.093 02:11:48 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:34.093 02:11:48 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:34.093 02:11:48 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:34.093 02:11:48 -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:34.093 02:11:48 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:34.093 02:11:48 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:34.093 02:11:48 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:34.093 02:11:48 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:34.093 02:11:48 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:34.093 02:11:48 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:34.351 02:11:48 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:34.351 02:11:48 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:34.610 02:11:49 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:34.610 02:11:49 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:34.610 02:11:49 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:34.868 02:11:49 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:34.868 02:11:49 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:34.868 02:11:49 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a lvol 150 00:14:35.126 02:11:49 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc 00:14:35.126 02:11:49 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:35.126 02:11:49 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:35.385 [2024-05-14 02:11:49.962679] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:35.385 [2024-05-14 02:11:49.962762] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:35.385 true 00:14:35.643 02:11:49 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:35.643 02:11:49 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:35.902 02:11:50 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:35.902 02:11:50 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:36.160 02:11:50 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc 00:14:36.418 02:11:50 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:36.418 02:11:51 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:36.985 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:14:36.985 02:11:51 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:36.985 02:11:51 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71819 00:14:36.985 02:11:51 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:36.985 02:11:51 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71819 /var/tmp/bdevperf.sock 00:14:36.985 02:11:51 -- common/autotest_common.sh@819 -- # '[' -z 71819 ']' 00:14:36.985 02:11:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.985 02:11:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:36.985 02:11:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.985 02:11:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:36.986 02:11:51 -- common/autotest_common.sh@10 -- # set +x 00:14:36.986 [2024-05-14 02:11:51.314447] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:36.986 [2024-05-14 02:11:51.314526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71819 ] 00:14:36.986 [2024-05-14 02:11:51.449485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.986 [2024-05-14 02:11:51.516435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.921 02:11:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:37.921 02:11:52 -- common/autotest_common.sh@852 -- # return 0 00:14:37.921 02:11:52 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:38.179 Nvme0n1 00:14:38.179 02:11:52 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:38.438 [ 00:14:38.438 { 00:14:38.438 "aliases": [ 00:14:38.438 "7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc" 00:14:38.438 ], 00:14:38.438 "assigned_rate_limits": { 00:14:38.438 "r_mbytes_per_sec": 0, 00:14:38.438 "rw_ios_per_sec": 0, 00:14:38.438 "rw_mbytes_per_sec": 0, 00:14:38.438 "w_mbytes_per_sec": 0 00:14:38.438 }, 00:14:38.438 "block_size": 4096, 00:14:38.438 "claimed": false, 00:14:38.438 "driver_specific": { 00:14:38.438 "mp_policy": "active_passive", 00:14:38.438 "nvme": [ 00:14:38.438 { 00:14:38.438 "ctrlr_data": { 00:14:38.438 "ana_reporting": false, 00:14:38.438 "cntlid": 1, 00:14:38.438 "firmware_revision": "24.01.1", 00:14:38.438 "model_number": "SPDK bdev Controller", 00:14:38.438 "multi_ctrlr": true, 00:14:38.438 "oacs": { 00:14:38.438 "firmware": 0, 00:14:38.438 "format": 0, 00:14:38.438 "ns_manage": 0, 00:14:38.438 "security": 0 00:14:38.438 }, 00:14:38.438 "serial_number": "SPDK0", 00:14:38.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:38.438 "vendor_id": "0x8086" 00:14:38.438 }, 00:14:38.438 "ns_data": { 00:14:38.438 "can_share": true, 00:14:38.438 "id": 1 00:14:38.438 }, 00:14:38.438 "trid": { 00:14:38.438 "adrfam": "IPv4", 00:14:38.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:38.438 "traddr": "10.0.0.2", 00:14:38.438 "trsvcid": "4420", 00:14:38.438 
"trtype": "TCP" 00:14:38.438 }, 00:14:38.438 "vs": { 00:14:38.438 "nvme_version": "1.3" 00:14:38.438 } 00:14:38.438 } 00:14:38.438 ] 00:14:38.438 }, 00:14:38.438 "name": "Nvme0n1", 00:14:38.438 "num_blocks": 38912, 00:14:38.438 "product_name": "NVMe disk", 00:14:38.438 "supported_io_types": { 00:14:38.438 "abort": true, 00:14:38.438 "compare": true, 00:14:38.438 "compare_and_write": true, 00:14:38.438 "flush": true, 00:14:38.438 "nvme_admin": true, 00:14:38.438 "nvme_io": true, 00:14:38.438 "read": true, 00:14:38.438 "reset": true, 00:14:38.438 "unmap": true, 00:14:38.438 "write": true, 00:14:38.438 "write_zeroes": true 00:14:38.438 }, 00:14:38.438 "uuid": "7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc", 00:14:38.438 "zoned": false 00:14:38.438 } 00:14:38.438 ] 00:14:38.438 02:11:52 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71865 00:14:38.438 02:11:52 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:38.438 02:11:52 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:38.438 Running I/O for 10 seconds... 00:14:39.813 Latency(us) 00:14:39.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.813 Nvme0n1 : 1.00 8320.00 32.50 0.00 0.00 0.00 0.00 0.00 00:14:39.813 =================================================================================================================== 00:14:39.813 Total : 8320.00 32.50 0.00 0.00 0.00 0.00 0.00 00:14:39.813 00:14:40.379 02:11:54 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:40.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.637 Nvme0n1 : 2.00 8267.50 32.29 0.00 0.00 0.00 0.00 0.00 00:14:40.637 =================================================================================================================== 00:14:40.637 Total : 8267.50 32.29 0.00 0.00 0.00 0.00 0.00 00:14:40.637 00:14:40.637 true 00:14:40.637 02:11:55 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:40.637 02:11:55 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:40.895 02:11:55 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:40.895 02:11:55 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:40.895 02:11:55 -- target/nvmf_lvs_grow.sh@65 -- # wait 71865 00:14:41.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.461 Nvme0n1 : 3.00 8292.33 32.39 0.00 0.00 0.00 0.00 0.00 00:14:41.461 =================================================================================================================== 00:14:41.461 Total : 8292.33 32.39 0.00 0.00 0.00 0.00 0.00 00:14:41.461 00:14:42.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.395 Nvme0n1 : 4.00 8307.25 32.45 0.00 0.00 0.00 0.00 0.00 00:14:42.395 =================================================================================================================== 00:14:42.395 Total : 8307.25 32.45 0.00 0.00 0.00 0.00 0.00 00:14:42.395 00:14:43.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.768 Nvme0n1 : 5.00 8285.00 32.36 0.00 0.00 0.00 0.00 0.00 00:14:43.768 
=================================================================================================================== 00:14:43.768 Total : 8285.00 32.36 0.00 0.00 0.00 0.00 0.00 00:14:43.768 00:14:44.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.701 Nvme0n1 : 6.00 8277.00 32.33 0.00 0.00 0.00 0.00 0.00 00:14:44.701 =================================================================================================================== 00:14:44.701 Total : 8277.00 32.33 0.00 0.00 0.00 0.00 0.00 00:14:44.701 00:14:45.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.636 Nvme0n1 : 7.00 8205.57 32.05 0.00 0.00 0.00 0.00 0.00 00:14:45.636 =================================================================================================================== 00:14:45.636 Total : 8205.57 32.05 0.00 0.00 0.00 0.00 0.00 00:14:45.636 00:14:46.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.570 Nvme0n1 : 8.00 7837.00 30.61 0.00 0.00 0.00 0.00 0.00 00:14:46.570 =================================================================================================================== 00:14:46.570 Total : 7837.00 30.61 0.00 0.00 0.00 0.00 0.00 00:14:46.570 00:14:47.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.536 Nvme0n1 : 9.00 7820.33 30.55 0.00 0.00 0.00 0.00 0.00 00:14:47.536 =================================================================================================================== 00:14:47.536 Total : 7820.33 30.55 0.00 0.00 0.00 0.00 0.00 00:14:47.536 00:14:48.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.470 Nvme0n1 : 10.00 7813.30 30.52 0.00 0.00 0.00 0.00 0.00 00:14:48.470 =================================================================================================================== 00:14:48.470 Total : 7813.30 30.52 0.00 0.00 0.00 0.00 0.00 00:14:48.470 00:14:48.470 00:14:48.470 Latency(us) 00:14:48.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.471 Nvme0n1 : 10.01 7821.52 30.55 0.00 0.00 16360.65 3872.58 379393.86 00:14:48.471 =================================================================================================================== 00:14:48.471 Total : 7821.52 30.55 0.00 0.00 16360.65 3872.58 379393.86 00:14:48.471 0 00:14:48.471 02:12:02 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71819 00:14:48.471 02:12:02 -- common/autotest_common.sh@926 -- # '[' -z 71819 ']' 00:14:48.471 02:12:02 -- common/autotest_common.sh@930 -- # kill -0 71819 00:14:48.471 02:12:03 -- common/autotest_common.sh@931 -- # uname 00:14:48.471 02:12:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:48.471 02:12:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71819 00:14:48.471 killing process with pid 71819 00:14:48.471 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.471 00:14:48.471 Latency(us) 00:14:48.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.471 =================================================================================================================== 00:14:48.471 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.471 02:12:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:48.471 02:12:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:14:48.471 02:12:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71819' 00:14:48.471 02:12:03 -- common/autotest_common.sh@945 -- # kill 71819 00:14:48.471 02:12:03 -- common/autotest_common.sh@950 -- # wait 71819 00:14:48.729 02:12:03 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:48.988 02:12:03 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:48.988 02:12:03 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:49.245 02:12:03 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:49.245 02:12:03 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:49.245 02:12:03 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 71218 00:14:49.245 02:12:03 -- target/nvmf_lvs_grow.sh@74 -- # wait 71218 00:14:49.245 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 71218 Killed "${NVMF_APP[@]}" "$@" 00:14:49.245 02:12:03 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:49.245 02:12:03 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:49.245 02:12:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:49.245 02:12:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:49.245 02:12:03 -- common/autotest_common.sh@10 -- # set +x 00:14:49.245 02:12:03 -- nvmf/common.sh@469 -- # nvmfpid=72017 00:14:49.245 02:12:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:49.246 02:12:03 -- nvmf/common.sh@470 -- # waitforlisten 72017 00:14:49.246 02:12:03 -- common/autotest_common.sh@819 -- # '[' -z 72017 ']' 00:14:49.246 02:12:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.246 02:12:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:49.246 02:12:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.246 02:12:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:49.246 02:12:03 -- common/autotest_common.sh@10 -- # set +x 00:14:49.504 [2024-05-14 02:12:03.874137] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:49.504 [2024-05-14 02:12:03.874253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.504 [2024-05-14 02:12:04.016579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.504 [2024-05-14 02:12:04.077100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:49.504 [2024-05-14 02:12:04.077253] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.504 [2024-05-14 02:12:04.077267] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.504 [2024-05-14 02:12:04.077277] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:49.504 [2024-05-14 02:12:04.077310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.440 02:12:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:50.440 02:12:04 -- common/autotest_common.sh@852 -- # return 0 00:14:50.440 02:12:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:50.440 02:12:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:50.440 02:12:04 -- common/autotest_common.sh@10 -- # set +x 00:14:50.440 02:12:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.440 02:12:04 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:50.698 [2024-05-14 02:12:05.103659] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:50.698 [2024-05-14 02:12:05.103941] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:50.698 [2024-05-14 02:12:05.104144] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:50.698 02:12:05 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:50.698 02:12:05 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc 00:14:50.698 02:12:05 -- common/autotest_common.sh@887 -- # local bdev_name=7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc 00:14:50.698 02:12:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:50.698 02:12:05 -- common/autotest_common.sh@889 -- # local i 00:14:50.698 02:12:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:50.698 02:12:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:50.698 02:12:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:50.957 02:12:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc -t 2000 00:14:51.216 [ 00:14:51.216 { 00:14:51.216 "aliases": [ 00:14:51.216 "lvs/lvol" 00:14:51.216 ], 00:14:51.216 "assigned_rate_limits": { 00:14:51.216 "r_mbytes_per_sec": 0, 00:14:51.216 "rw_ios_per_sec": 0, 00:14:51.216 "rw_mbytes_per_sec": 0, 00:14:51.216 "w_mbytes_per_sec": 0 00:14:51.216 }, 00:14:51.216 "block_size": 4096, 00:14:51.216 "claimed": false, 00:14:51.216 "driver_specific": { 00:14:51.216 "lvol": { 00:14:51.216 "base_bdev": "aio_bdev", 00:14:51.216 "clone": false, 00:14:51.216 "esnap_clone": false, 00:14:51.216 "lvol_store_uuid": "b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a", 00:14:51.216 "snapshot": false, 00:14:51.216 "thin_provision": false 00:14:51.216 } 00:14:51.216 }, 00:14:51.216 "name": "7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc", 00:14:51.216 "num_blocks": 38912, 00:14:51.216 "product_name": "Logical Volume", 00:14:51.216 "supported_io_types": { 00:14:51.216 "abort": false, 00:14:51.216 "compare": false, 00:14:51.216 "compare_and_write": false, 00:14:51.216 "flush": false, 00:14:51.216 "nvme_admin": false, 00:14:51.216 "nvme_io": false, 00:14:51.216 "read": true, 00:14:51.216 "reset": true, 00:14:51.216 "unmap": true, 00:14:51.216 "write": true, 00:14:51.216 "write_zeroes": true 00:14:51.216 }, 00:14:51.216 "uuid": "7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc", 00:14:51.216 "zoned": false 00:14:51.216 } 00:14:51.216 ] 00:14:51.216 02:12:05 -- common/autotest_common.sh@895 -- # return 0 00:14:51.216 02:12:05 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:51.216 02:12:05 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:51.474 02:12:05 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:51.474 02:12:05 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:51.474 02:12:05 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:51.732 02:12:06 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:51.732 02:12:06 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:51.990 [2024-05-14 02:12:06.425299] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:51.990 02:12:06 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:51.990 02:12:06 -- common/autotest_common.sh@640 -- # local es=0 00:14:51.990 02:12:06 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:51.990 02:12:06 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.990 02:12:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:51.990 02:12:06 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.990 02:12:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:51.990 02:12:06 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.990 02:12:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:51.990 02:12:06 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.990 02:12:06 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:51.990 02:12:06 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:52.249 2024/05/14 02:12:06 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:52.249 request: 00:14:52.249 { 00:14:52.249 "method": "bdev_lvol_get_lvstores", 00:14:52.249 "params": { 00:14:52.249 "uuid": "b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a" 00:14:52.249 } 00:14:52.249 } 00:14:52.249 Got JSON-RPC error response 00:14:52.249 GoRPCClient: error on JSON-RPC call 00:14:52.249 02:12:06 -- common/autotest_common.sh@643 -- # es=1 00:14:52.249 02:12:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:52.249 02:12:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:52.249 02:12:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:52.249 02:12:06 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:52.508 aio_bdev 00:14:52.508 02:12:07 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc 00:14:52.509 02:12:07 -- common/autotest_common.sh@887 -- # local bdev_name=7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc 00:14:52.509 02:12:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:52.509 
02:12:07 -- common/autotest_common.sh@889 -- # local i 00:14:52.509 02:12:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:52.509 02:12:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:52.509 02:12:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:52.767 02:12:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc -t 2000 00:14:53.025 [ 00:14:53.025 { 00:14:53.025 "aliases": [ 00:14:53.025 "lvs/lvol" 00:14:53.025 ], 00:14:53.025 "assigned_rate_limits": { 00:14:53.025 "r_mbytes_per_sec": 0, 00:14:53.025 "rw_ios_per_sec": 0, 00:14:53.025 "rw_mbytes_per_sec": 0, 00:14:53.025 "w_mbytes_per_sec": 0 00:14:53.025 }, 00:14:53.025 "block_size": 4096, 00:14:53.025 "claimed": false, 00:14:53.025 "driver_specific": { 00:14:53.025 "lvol": { 00:14:53.025 "base_bdev": "aio_bdev", 00:14:53.025 "clone": false, 00:14:53.025 "esnap_clone": false, 00:14:53.025 "lvol_store_uuid": "b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a", 00:14:53.025 "snapshot": false, 00:14:53.025 "thin_provision": false 00:14:53.025 } 00:14:53.025 }, 00:14:53.025 "name": "7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc", 00:14:53.025 "num_blocks": 38912, 00:14:53.025 "product_name": "Logical Volume", 00:14:53.025 "supported_io_types": { 00:14:53.025 "abort": false, 00:14:53.025 "compare": false, 00:14:53.025 "compare_and_write": false, 00:14:53.025 "flush": false, 00:14:53.025 "nvme_admin": false, 00:14:53.025 "nvme_io": false, 00:14:53.025 "read": true, 00:14:53.025 "reset": true, 00:14:53.025 "unmap": true, 00:14:53.025 "write": true, 00:14:53.025 "write_zeroes": true 00:14:53.025 }, 00:14:53.025 "uuid": "7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc", 00:14:53.025 "zoned": false 00:14:53.025 } 00:14:53.025 ] 00:14:53.025 02:12:07 -- common/autotest_common.sh@895 -- # return 0 00:14:53.025 02:12:07 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:53.025 02:12:07 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:53.283 02:12:07 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:53.283 02:12:07 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:53.283 02:12:07 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:53.542 02:12:08 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:53.542 02:12:08 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7863cb7f-0b5b-44ee-8445-e5e4ca3f3fbc 00:14:54.108 02:12:08 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0a8cc62-ad56-4ac9-b38e-e2bfb87f635a 00:14:54.367 02:12:08 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:54.626 02:12:08 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:54.884 ************************************ 00:14:54.884 END TEST lvs_grow_dirty 00:14:54.884 ************************************ 00:14:54.884 00:14:54.884 real 0m20.841s 00:14:54.884 user 0m43.513s 00:14:54.884 sys 0m7.557s 00:14:54.884 02:12:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.884 02:12:09 -- common/autotest_common.sh@10 -- # set +x 00:14:54.884 02:12:09 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:54.884 02:12:09 -- common/autotest_common.sh@796 -- # type=--id 00:14:54.884 02:12:09 -- common/autotest_common.sh@797 -- # id=0 00:14:54.884 02:12:09 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:54.884 02:12:09 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:54.884 02:12:09 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:54.884 02:12:09 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:54.884 02:12:09 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:54.884 02:12:09 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:54.884 nvmf_trace.0 00:14:54.884 02:12:09 -- common/autotest_common.sh@811 -- # return 0 00:14:54.884 02:12:09 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:54.884 02:12:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:54.884 02:12:09 -- nvmf/common.sh@116 -- # sync 00:14:55.143 02:12:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:55.143 02:12:09 -- nvmf/common.sh@119 -- # set +e 00:14:55.143 02:12:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:55.143 02:12:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:55.143 rmmod nvme_tcp 00:14:55.143 rmmod nvme_fabrics 00:14:55.143 rmmod nvme_keyring 00:14:55.143 02:12:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:55.143 02:12:09 -- nvmf/common.sh@123 -- # set -e 00:14:55.143 02:12:09 -- nvmf/common.sh@124 -- # return 0 00:14:55.143 02:12:09 -- nvmf/common.sh@477 -- # '[' -n 72017 ']' 00:14:55.143 02:12:09 -- nvmf/common.sh@478 -- # killprocess 72017 00:14:55.143 02:12:09 -- common/autotest_common.sh@926 -- # '[' -z 72017 ']' 00:14:55.143 02:12:09 -- common/autotest_common.sh@930 -- # kill -0 72017 00:14:55.143 02:12:09 -- common/autotest_common.sh@931 -- # uname 00:14:55.143 02:12:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:55.143 02:12:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72017 00:14:55.143 02:12:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:55.143 02:12:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:55.143 02:12:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72017' 00:14:55.143 killing process with pid 72017 00:14:55.143 02:12:09 -- common/autotest_common.sh@945 -- # kill 72017 00:14:55.143 02:12:09 -- common/autotest_common.sh@950 -- # wait 72017 00:14:55.401 02:12:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:55.401 02:12:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:55.401 02:12:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:55.401 02:12:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.401 02:12:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:55.401 02:12:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.401 02:12:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.401 02:12:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.401 02:12:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:55.401 00:14:55.401 real 0m41.515s 00:14:55.401 user 1m7.857s 00:14:55.401 sys 0m10.335s 00:14:55.401 02:12:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.401 02:12:09 -- common/autotest_common.sh@10 -- # set +x 00:14:55.401 
************************************ 00:14:55.401 END TEST nvmf_lvs_grow 00:14:55.401 ************************************ 00:14:55.401 02:12:09 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:55.401 02:12:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:55.401 02:12:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.401 02:12:09 -- common/autotest_common.sh@10 -- # set +x 00:14:55.401 ************************************ 00:14:55.402 START TEST nvmf_bdev_io_wait 00:14:55.402 ************************************ 00:14:55.402 02:12:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:55.402 * Looking for test storage... 00:14:55.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:55.402 02:12:09 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:55.402 02:12:09 -- nvmf/common.sh@7 -- # uname -s 00:14:55.661 02:12:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.661 02:12:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.661 02:12:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.661 02:12:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.661 02:12:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.661 02:12:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.661 02:12:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.661 02:12:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.661 02:12:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.661 02:12:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.661 02:12:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:14:55.661 02:12:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:14:55.661 02:12:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.661 02:12:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.661 02:12:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:55.661 02:12:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:55.661 02:12:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.661 02:12:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.661 02:12:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.661 02:12:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.661 02:12:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.661 02:12:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.661 02:12:10 -- paths/export.sh@5 -- # export PATH 00:14:55.661 02:12:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.661 02:12:10 -- nvmf/common.sh@46 -- # : 0 00:14:55.661 02:12:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:55.661 02:12:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:55.661 02:12:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:55.661 02:12:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.661 02:12:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.661 02:12:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:55.661 02:12:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:55.661 02:12:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:55.661 02:12:10 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.661 02:12:10 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.661 02:12:10 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:55.661 02:12:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:55.661 02:12:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.661 02:12:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:55.661 02:12:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:55.661 02:12:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:55.661 02:12:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.661 02:12:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.661 02:12:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.661 02:12:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:55.661 02:12:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:55.661 02:12:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:55.661 02:12:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:55.661 02:12:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
00:14:55.661 02:12:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:55.661 02:12:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.661 02:12:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.661 02:12:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:55.661 02:12:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:55.661 02:12:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:55.661 02:12:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:55.661 02:12:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:55.661 02:12:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.661 02:12:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:55.661 02:12:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:55.661 02:12:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:55.661 02:12:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:55.661 02:12:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:55.661 02:12:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:55.661 Cannot find device "nvmf_tgt_br" 00:14:55.661 02:12:10 -- nvmf/common.sh@154 -- # true 00:14:55.661 02:12:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:55.661 Cannot find device "nvmf_tgt_br2" 00:14:55.661 02:12:10 -- nvmf/common.sh@155 -- # true 00:14:55.661 02:12:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:55.661 02:12:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:55.661 Cannot find device "nvmf_tgt_br" 00:14:55.661 02:12:10 -- nvmf/common.sh@157 -- # true 00:14:55.661 02:12:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:55.661 Cannot find device "nvmf_tgt_br2" 00:14:55.661 02:12:10 -- nvmf/common.sh@158 -- # true 00:14:55.661 02:12:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:55.661 02:12:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:55.661 02:12:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:55.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.661 02:12:10 -- nvmf/common.sh@161 -- # true 00:14:55.661 02:12:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:55.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.661 02:12:10 -- nvmf/common.sh@162 -- # true 00:14:55.661 02:12:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:55.661 02:12:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:55.661 02:12:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:55.661 02:12:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:55.661 02:12:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:55.661 02:12:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:55.661 02:12:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:55.661 02:12:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:55.661 02:12:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:55.661 
02:12:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:55.920 02:12:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:55.920 02:12:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:55.920 02:12:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:55.920 02:12:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:55.920 02:12:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:55.920 02:12:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:55.920 02:12:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:55.920 02:12:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:55.920 02:12:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:55.920 02:12:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:55.920 02:12:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:55.920 02:12:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:55.920 02:12:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:55.920 02:12:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:55.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:14:55.920 00:14:55.920 --- 10.0.0.2 ping statistics --- 00:14:55.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.920 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:14:55.920 02:12:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:55.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:55.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:55.920 00:14:55.920 --- 10.0.0.3 ping statistics --- 00:14:55.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.920 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:55.920 02:12:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:55.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:55.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:55.920 00:14:55.920 --- 10.0.0.1 ping statistics --- 00:14:55.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.920 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:55.920 02:12:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.920 02:12:10 -- nvmf/common.sh@421 -- # return 0 00:14:55.920 02:12:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:55.920 02:12:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.920 02:12:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:55.920 02:12:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:55.920 02:12:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.920 02:12:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:55.920 02:12:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:55.920 02:12:10 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:55.920 02:12:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:55.920 02:12:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:55.920 02:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:55.920 02:12:10 -- nvmf/common.sh@469 -- # nvmfpid=72433 00:14:55.920 02:12:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:55.920 02:12:10 -- nvmf/common.sh@470 -- # waitforlisten 72433 00:14:55.920 02:12:10 -- common/autotest_common.sh@819 -- # '[' -z 72433 ']' 00:14:55.920 02:12:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.920 02:12:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:55.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.920 02:12:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.920 02:12:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:55.920 02:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:55.920 [2024-05-14 02:12:10.455561] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:55.920 [2024-05-14 02:12:10.455657] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.179 [2024-05-14 02:12:10.596966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.179 [2024-05-14 02:12:10.665255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:56.179 [2024-05-14 02:12:10.665428] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.179 [2024-05-14 02:12:10.665444] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.179 [2024-05-14 02:12:10.665454] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:56.179 [2024-05-14 02:12:10.665577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.179 [2024-05-14 02:12:10.665739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.179 [2024-05-14 02:12:10.665848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.179 [2024-05-14 02:12:10.665854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.179 02:12:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:56.179 02:12:10 -- common/autotest_common.sh@852 -- # return 0 00:14:56.179 02:12:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:56.179 02:12:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:56.179 02:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.179 02:12:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.179 02:12:10 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:56.179 02:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.179 02:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.179 02:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.179 02:12:10 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:56.179 02:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.179 02:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.438 02:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:56.439 02:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.439 02:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.439 [2024-05-14 02:12:10.803426] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.439 02:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:56.439 02:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.439 02:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.439 Malloc0 00:14:56.439 02:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:56.439 02:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.439 02:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.439 02:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:56.439 02:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.439 02:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.439 02:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.439 02:12:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.439 02:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.439 [2024-05-14 02:12:10.849859] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.439 02:12:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=72467 00:14:56.439 02:12:10 
-- target/bdev_io_wait.sh@30 -- # READ_PID=72469 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:56.439 02:12:10 -- nvmf/common.sh@520 -- # config=() 00:14:56.439 02:12:10 -- nvmf/common.sh@520 -- # local subsystem config 00:14:56.439 02:12:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:56.439 02:12:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:56.439 { 00:14:56.439 "params": { 00:14:56.439 "name": "Nvme$subsystem", 00:14:56.439 "trtype": "$TEST_TRANSPORT", 00:14:56.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:56.439 "adrfam": "ipv4", 00:14:56.439 "trsvcid": "$NVMF_PORT", 00:14:56.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:56.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:56.439 "hdgst": ${hdgst:-false}, 00:14:56.439 "ddgst": ${ddgst:-false} 00:14:56.439 }, 00:14:56.439 "method": "bdev_nvme_attach_controller" 00:14:56.439 } 00:14:56.439 EOF 00:14:56.439 )") 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=72471 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:56.439 02:12:10 -- nvmf/common.sh@520 -- # config=() 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:56.439 02:12:10 -- nvmf/common.sh@520 -- # local subsystem config 00:14:56.439 02:12:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:56.439 02:12:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:56.439 { 00:14:56.439 "params": { 00:14:56.439 "name": "Nvme$subsystem", 00:14:56.439 "trtype": "$TEST_TRANSPORT", 00:14:56.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:56.439 "adrfam": "ipv4", 00:14:56.439 "trsvcid": "$NVMF_PORT", 00:14:56.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:56.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:56.439 "hdgst": ${hdgst:-false}, 00:14:56.439 "ddgst": ${ddgst:-false} 00:14:56.439 }, 00:14:56.439 "method": "bdev_nvme_attach_controller" 00:14:56.439 } 00:14:56.439 EOF 00:14:56.439 )") 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=72474 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@35 -- # sync 00:14:56.439 02:12:10 -- nvmf/common.sh@542 -- # cat 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:56.439 02:12:10 -- nvmf/common.sh@520 -- # config=() 00:14:56.439 02:12:10 -- nvmf/common.sh@520 -- # local subsystem config 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:56.439 02:12:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:56.439 02:12:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:56.439 { 00:14:56.439 "params": { 00:14:56.439 "name": "Nvme$subsystem", 00:14:56.439 "trtype": "$TEST_TRANSPORT", 00:14:56.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:56.439 "adrfam": "ipv4", 00:14:56.439 "trsvcid": "$NVMF_PORT", 00:14:56.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:56.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:14:56.439 "hdgst": ${hdgst:-false}, 00:14:56.439 "ddgst": ${ddgst:-false} 00:14:56.439 }, 00:14:56.439 "method": "bdev_nvme_attach_controller" 00:14:56.439 } 00:14:56.439 EOF 00:14:56.439 )") 00:14:56.439 02:12:10 -- nvmf/common.sh@542 -- # cat 00:14:56.439 02:12:10 -- nvmf/common.sh@542 -- # cat 00:14:56.439 02:12:10 -- nvmf/common.sh@544 -- # jq . 00:14:56.439 02:12:10 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:56.439 02:12:10 -- nvmf/common.sh@520 -- # config=() 00:14:56.439 02:12:10 -- nvmf/common.sh@520 -- # local subsystem config 00:14:56.439 02:12:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:56.439 02:12:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:56.440 { 00:14:56.440 "params": { 00:14:56.440 "name": "Nvme$subsystem", 00:14:56.440 "trtype": "$TEST_TRANSPORT", 00:14:56.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:56.440 "adrfam": "ipv4", 00:14:56.440 "trsvcid": "$NVMF_PORT", 00:14:56.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:56.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:56.440 "hdgst": ${hdgst:-false}, 00:14:56.440 "ddgst": ${ddgst:-false} 00:14:56.440 }, 00:14:56.440 "method": "bdev_nvme_attach_controller" 00:14:56.440 } 00:14:56.440 EOF 00:14:56.440 )") 00:14:56.440 02:12:10 -- nvmf/common.sh@545 -- # IFS=, 00:14:56.440 02:12:10 -- nvmf/common.sh@544 -- # jq . 00:14:56.440 02:12:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:56.440 "params": { 00:14:56.440 "name": "Nvme1", 00:14:56.440 "trtype": "tcp", 00:14:56.440 "traddr": "10.0.0.2", 00:14:56.440 "adrfam": "ipv4", 00:14:56.440 "trsvcid": "4420", 00:14:56.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.440 "hdgst": false, 00:14:56.440 "ddgst": false 00:14:56.440 }, 00:14:56.440 "method": "bdev_nvme_attach_controller" 00:14:56.440 }' 00:14:56.440 02:12:10 -- nvmf/common.sh@542 -- # cat 00:14:56.440 02:12:10 -- nvmf/common.sh@545 -- # IFS=, 00:14:56.440 02:12:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:56.440 "params": { 00:14:56.440 "name": "Nvme1", 00:14:56.440 "trtype": "tcp", 00:14:56.440 "traddr": "10.0.0.2", 00:14:56.440 "adrfam": "ipv4", 00:14:56.440 "trsvcid": "4420", 00:14:56.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.440 "hdgst": false, 00:14:56.440 "ddgst": false 00:14:56.440 }, 00:14:56.440 "method": "bdev_nvme_attach_controller" 00:14:56.440 }' 00:14:56.440 02:12:10 -- nvmf/common.sh@544 -- # jq . 00:14:56.440 02:12:10 -- nvmf/common.sh@545 -- # IFS=, 00:14:56.440 02:12:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:56.440 "params": { 00:14:56.440 "name": "Nvme1", 00:14:56.440 "trtype": "tcp", 00:14:56.440 "traddr": "10.0.0.2", 00:14:56.440 "adrfam": "ipv4", 00:14:56.440 "trsvcid": "4420", 00:14:56.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.440 "hdgst": false, 00:14:56.440 "ddgst": false 00:14:56.440 }, 00:14:56.440 "method": "bdev_nvme_attach_controller" 00:14:56.440 }' 00:14:56.440 02:12:10 -- nvmf/common.sh@544 -- # jq . 
00:14:56.440 02:12:10 -- nvmf/common.sh@545 -- # IFS=, 00:14:56.440 02:12:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:56.440 "params": { 00:14:56.440 "name": "Nvme1", 00:14:56.440 "trtype": "tcp", 00:14:56.440 "traddr": "10.0.0.2", 00:14:56.440 "adrfam": "ipv4", 00:14:56.440 "trsvcid": "4420", 00:14:56.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.440 "hdgst": false, 00:14:56.440 "ddgst": false 00:14:56.440 }, 00:14:56.440 "method": "bdev_nvme_attach_controller" 00:14:56.440 }' 00:14:56.440 [2024-05-14 02:12:10.911648] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:56.440 [2024-05-14 02:12:10.911736] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:56.440 [2024-05-14 02:12:10.924301] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:56.440 [2024-05-14 02:12:10.924381] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:56.440 [2024-05-14 02:12:10.933056] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:56.440 [2024-05-14 02:12:10.933119] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:56.440 [2024-05-14 02:12:10.938858] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:56.440 [2024-05-14 02:12:10.938935] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:56.697 [2024-05-14 02:12:11.088381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.698 [2024-05-14 02:12:11.127914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.698 [2024-05-14 02:12:11.148858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:56.698 02:12:11 -- target/bdev_io_wait.sh@37 -- # wait 72467 00:14:56.698 [2024-05-14 02:12:11.167497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.698 [2024-05-14 02:12:11.184052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:56.698 [2024-05-14 02:12:11.215416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.698 [2024-05-14 02:12:11.216368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:56.698 [2024-05-14 02:12:11.274188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:56.955 Running I/O for 1 seconds... 00:14:56.955 Running I/O for 1 seconds... 00:14:56.955 Running I/O for 1 seconds... 00:14:56.955 Running I/O for 1 seconds... 
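Taken together, the trace above is doing the following: gen_nvmf_target_json (test/nvmf/common.sh) assembles one bdev_nvme_attach_controller entry per subsystem in a bash array via here-docs, resolves it with jq and an IFS join, and hands the result to each bdevperf instance through process substitution (--json /dev/fd/63); four bdevperf instances then run in parallel, one workload each, and the script waits on their PIDs. A condensed, illustrative re-creation of that flow, not the literal helper (gen_target_json and BPERF are stand-in names, flags and paths copied from the log):

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"    # the resolved JSON the log prints above
}

BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# one bdevperf per workload, each pinned to its own core, all against the same target
"$BPERF" -m 0x10 -i 1 --json <(gen_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BPERF" -m 0x20 -i 2 --json <(gen_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$BPERF" -m 0x40 -i 3 --json <(gen_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BPERF" -m 0x80 -i 4 --json <(gen_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The per-workload result tables that follow in the log come from these four instances completing.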
00:14:57.889 00:14:57.889 Latency(us) 00:14:57.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.889 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:57.889 Nvme1n1 : 1.00 181317.82 708.27 0.00 0.00 703.19 273.69 1057.51 00:14:57.889 =================================================================================================================== 00:14:57.889 Total : 181317.82 708.27 0.00 0.00 703.19 273.69 1057.51 00:14:57.889 00:14:57.889 Latency(us) 00:14:57.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.889 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:57.890 Nvme1n1 : 1.01 10573.78 41.30 0.00 0.00 12063.46 5779.08 18469.24 00:14:57.890 =================================================================================================================== 00:14:57.890 Total : 10573.78 41.30 0.00 0.00 12063.46 5779.08 18469.24 00:14:57.890 00:14:57.890 Latency(us) 00:14:57.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.890 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:57.890 Nvme1n1 : 1.01 7252.77 28.33 0.00 0.00 17564.04 8340.95 28716.68 00:14:57.890 =================================================================================================================== 00:14:57.890 Total : 7252.77 28.33 0.00 0.00 17564.04 8340.95 28716.68 00:14:57.890 00:14:57.890 Latency(us) 00:14:57.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.890 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:57.890 Nvme1n1 : 1.01 7470.88 29.18 0.00 0.00 17043.77 9592.09 27048.49 00:14:57.890 =================================================================================================================== 00:14:57.890 Total : 7470.88 29.18 0.00 0.00 17043.77 9592.09 27048.49 00:14:58.148 02:12:12 -- target/bdev_io_wait.sh@38 -- # wait 72469 00:14:58.148 02:12:12 -- target/bdev_io_wait.sh@39 -- # wait 72471 00:14:58.148 02:12:12 -- target/bdev_io_wait.sh@40 -- # wait 72474 00:14:58.148 02:12:12 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.148 02:12:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.148 02:12:12 -- common/autotest_common.sh@10 -- # set +x 00:14:58.148 02:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.148 02:12:12 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:58.148 02:12:12 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:58.148 02:12:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:58.148 02:12:12 -- nvmf/common.sh@116 -- # sync 00:14:58.148 02:12:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:58.148 02:12:12 -- nvmf/common.sh@119 -- # set +e 00:14:58.148 02:12:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:58.148 02:12:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:58.148 rmmod nvme_tcp 00:14:58.148 rmmod nvme_fabrics 00:14:58.148 rmmod nvme_keyring 00:14:58.406 02:12:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:58.406 02:12:12 -- nvmf/common.sh@123 -- # set -e 00:14:58.406 02:12:12 -- nvmf/common.sh@124 -- # return 0 00:14:58.406 02:12:12 -- nvmf/common.sh@477 -- # '[' -n 72433 ']' 00:14:58.406 02:12:12 -- nvmf/common.sh@478 -- # killprocess 72433 00:14:58.406 02:12:12 -- common/autotest_common.sh@926 -- # '[' -z 72433 ']' 00:14:58.406 02:12:12 -- common/autotest_common.sh@930 
-- # kill -0 72433 00:14:58.406 02:12:12 -- common/autotest_common.sh@931 -- # uname 00:14:58.406 02:12:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:58.406 02:12:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72433 00:14:58.406 02:12:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:58.406 02:12:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:58.406 killing process with pid 72433 00:14:58.406 02:12:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72433' 00:14:58.406 02:12:12 -- common/autotest_common.sh@945 -- # kill 72433 00:14:58.406 02:12:12 -- common/autotest_common.sh@950 -- # wait 72433 00:14:58.406 02:12:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:58.406 02:12:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:58.406 02:12:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:58.406 02:12:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.406 02:12:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:58.406 02:12:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.406 02:12:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.406 02:12:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.406 02:12:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:58.406 00:14:58.406 real 0m3.055s 00:14:58.406 user 0m13.388s 00:14:58.406 sys 0m1.838s 00:14:58.406 02:12:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.406 02:12:12 -- common/autotest_common.sh@10 -- # set +x 00:14:58.406 ************************************ 00:14:58.406 END TEST nvmf_bdev_io_wait 00:14:58.406 ************************************ 00:14:58.665 02:12:13 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:58.665 02:12:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:58.665 02:12:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:58.665 02:12:13 -- common/autotest_common.sh@10 -- # set +x 00:14:58.665 ************************************ 00:14:58.665 START TEST nvmf_queue_depth 00:14:58.665 ************************************ 00:14:58.665 02:12:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:58.665 * Looking for test storage... 
00:14:58.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:58.665 02:12:13 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:58.665 02:12:13 -- nvmf/common.sh@7 -- # uname -s 00:14:58.665 02:12:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.665 02:12:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.665 02:12:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.665 02:12:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.665 02:12:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.665 02:12:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.665 02:12:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.665 02:12:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.665 02:12:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.665 02:12:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.665 02:12:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:14:58.665 02:12:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:14:58.665 02:12:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.665 02:12:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.665 02:12:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:58.665 02:12:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.665 02:12:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.665 02:12:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.665 02:12:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.665 02:12:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.665 02:12:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.665 02:12:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.665 02:12:13 -- 
paths/export.sh@5 -- # export PATH 00:14:58.665 02:12:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.665 02:12:13 -- nvmf/common.sh@46 -- # : 0 00:14:58.665 02:12:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:58.665 02:12:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:58.666 02:12:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:58.666 02:12:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.666 02:12:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.666 02:12:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:58.666 02:12:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:58.666 02:12:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:58.666 02:12:13 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:58.666 02:12:13 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:58.666 02:12:13 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:58.666 02:12:13 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:58.666 02:12:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:58.666 02:12:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.666 02:12:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:58.666 02:12:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:58.666 02:12:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:58.666 02:12:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.666 02:12:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.666 02:12:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.666 02:12:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:58.666 02:12:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:58.666 02:12:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:58.666 02:12:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:58.666 02:12:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:58.666 02:12:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:58.666 02:12:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.666 02:12:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.666 02:12:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:58.666 02:12:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:58.666 02:12:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:58.666 02:12:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:58.666 02:12:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:58.666 02:12:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.666 02:12:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:58.666 02:12:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:58.666 02:12:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:58.666 02:12:13 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:58.666 02:12:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:58.666 02:12:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:58.666 Cannot find device "nvmf_tgt_br" 00:14:58.666 02:12:13 -- nvmf/common.sh@154 -- # true 00:14:58.666 02:12:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.666 Cannot find device "nvmf_tgt_br2" 00:14:58.666 02:12:13 -- nvmf/common.sh@155 -- # true 00:14:58.666 02:12:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:58.666 02:12:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:58.666 Cannot find device "nvmf_tgt_br" 00:14:58.666 02:12:13 -- nvmf/common.sh@157 -- # true 00:14:58.666 02:12:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:58.666 Cannot find device "nvmf_tgt_br2" 00:14:58.666 02:12:13 -- nvmf/common.sh@158 -- # true 00:14:58.666 02:12:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:58.666 02:12:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:58.666 02:12:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.666 02:12:13 -- nvmf/common.sh@161 -- # true 00:14:58.666 02:12:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.666 02:12:13 -- nvmf/common.sh@162 -- # true 00:14:58.666 02:12:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:58.666 02:12:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:58.666 02:12:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:58.666 02:12:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:58.941 02:12:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:58.941 02:12:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:58.941 02:12:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:58.941 02:12:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:58.941 02:12:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:58.941 02:12:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:58.941 02:12:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:58.941 02:12:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:58.941 02:12:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:58.941 02:12:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:58.941 02:12:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:58.941 02:12:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:58.941 02:12:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:58.942 02:12:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:58.942 02:12:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:58.942 02:12:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:58.942 02:12:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:58.942 
02:12:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:58.942 02:12:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:58.942 02:12:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:58.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:14:58.942 00:14:58.942 --- 10.0.0.2 ping statistics --- 00:14:58.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.942 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:58.942 02:12:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:58.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:58.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:14:58.942 00:14:58.942 --- 10.0.0.3 ping statistics --- 00:14:58.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.942 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:58.942 02:12:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:58.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:58.942 00:14:58.942 --- 10.0.0.1 ping statistics --- 00:14:58.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.942 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:58.942 02:12:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.942 02:12:13 -- nvmf/common.sh@421 -- # return 0 00:14:58.942 02:12:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:58.942 02:12:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.942 02:12:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:58.942 02:12:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:58.942 02:12:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.942 02:12:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:58.942 02:12:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:58.942 02:12:13 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:58.942 02:12:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:58.942 02:12:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:58.942 02:12:13 -- common/autotest_common.sh@10 -- # set +x 00:14:58.942 02:12:13 -- nvmf/common.sh@469 -- # nvmfpid=72677 00:14:58.942 02:12:13 -- nvmf/common.sh@470 -- # waitforlisten 72677 00:14:58.942 02:12:13 -- common/autotest_common.sh@819 -- # '[' -z 72677 ']' 00:14:58.942 02:12:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:58.942 02:12:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.942 02:12:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:58.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.942 02:12:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.942 02:12:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:58.942 02:12:13 -- common/autotest_common.sh@10 -- # set +x 00:14:58.942 [2024-05-14 02:12:13.503234] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:14:58.942 [2024-05-14 02:12:13.503316] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.203 [2024-05-14 02:12:13.636067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.203 [2024-05-14 02:12:13.692130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:59.203 [2024-05-14 02:12:13.692270] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.203 [2024-05-14 02:12:13.692283] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.203 [2024-05-14 02:12:13.692292] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.203 [2024-05-14 02:12:13.692321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.138 02:12:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:00.138 02:12:14 -- common/autotest_common.sh@852 -- # return 0 00:15:00.138 02:12:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:00.138 02:12:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:00.138 02:12:14 -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 02:12:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.138 02:12:14 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:00.138 02:12:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.138 02:12:14 -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 [2024-05-14 02:12:14.486917] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.138 02:12:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.138 02:12:14 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:00.138 02:12:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.138 02:12:14 -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 Malloc0 00:15:00.138 02:12:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.138 02:12:14 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:00.138 02:12:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.138 02:12:14 -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 02:12:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.138 02:12:14 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:00.138 02:12:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.138 02:12:14 -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 02:12:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.138 02:12:14 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.138 02:12:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.138 02:12:14 -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 [2024-05-14 02:12:14.544259] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
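The target-side setup that queue_depth.sh performs through rpc_cmd above boils down to five RPCs against the nvmf_tgt running in the namespace. Spelled out directly with scripts/rpc.py it amounts to roughly the following (illustrative only; rpc_cmd in the log is a thin wrapper that also handles retries and the socket path):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as in the log
$RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up on 10.0.0.2:4420, the script moves on to the initiator-side bdevperf run below.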
00:15:00.138 02:12:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.138 02:12:14 -- target/queue_depth.sh@30 -- # bdevperf_pid=72727 00:15:00.138 02:12:14 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:00.138 02:12:14 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:00.138 02:12:14 -- target/queue_depth.sh@33 -- # waitforlisten 72727 /var/tmp/bdevperf.sock 00:15:00.138 02:12:14 -- common/autotest_common.sh@819 -- # '[' -z 72727 ']' 00:15:00.138 02:12:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.138 02:12:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:00.138 02:12:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.138 02:12:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:00.138 02:12:14 -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 [2024-05-14 02:12:14.609851] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:00.138 [2024-05-14 02:12:14.609931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72727 ] 00:15:00.397 [2024-05-14 02:12:14.747396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.397 [2024-05-14 02:12:14.806169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.332 02:12:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.332 02:12:15 -- common/autotest_common.sh@852 -- # return 0 00:15:01.332 02:12:15 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:01.332 02:12:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.332 02:12:15 -- common/autotest_common.sh@10 -- # set +x 00:15:01.332 NVMe0n1 00:15:01.332 02:12:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.332 02:12:15 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:01.332 Running I/O for 10 seconds... 
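On the initiator side this is the usual remote-RPC bdevperf pattern: start bdevperf idle (-z) with its own RPC socket, attach the NVMe-oF namespace as a bdev over that socket, then trigger the run from bdevperf.py. A hedged sketch of those three steps, with flags and paths taken from the log; the polling loop is a simplified stand-in for the script's waitforlisten:

BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

"$BPERF" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &        # queue depth 1024, 4 KiB verify I/O, 10 s
bdevperf_pid=$!
until "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null; do sleep 0.5; done   # wait for the app's RPC socket
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1                               # shows up inside bdevperf as NVMe0n1
"$BPERF_PY" -s "$SOCK" perform_tests                            # run the workload, wait for the result table
kill "$bdevperf_pid"; wait "$bdevperf_pid" 2>/dev/null

The 10-second result table that follows in the log is the output of that perform_tests call.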
00:15:11.305 00:15:11.305 Latency(us) 00:15:11.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.305 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:11.305 Verification LBA range: start 0x0 length 0x4000 00:15:11.305 NVMe0n1 : 10.07 13464.14 52.59 0.00 0.00 75761.99 14298.76 58386.62 00:15:11.305 =================================================================================================================== 00:15:11.305 Total : 13464.14 52.59 0.00 0.00 75761.99 14298.76 58386.62 00:15:11.305 0 00:15:11.305 02:12:25 -- target/queue_depth.sh@39 -- # killprocess 72727 00:15:11.305 02:12:25 -- common/autotest_common.sh@926 -- # '[' -z 72727 ']' 00:15:11.305 02:12:25 -- common/autotest_common.sh@930 -- # kill -0 72727 00:15:11.305 02:12:25 -- common/autotest_common.sh@931 -- # uname 00:15:11.305 02:12:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:11.305 02:12:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72727 00:15:11.305 killing process with pid 72727 00:15:11.305 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.305 00:15:11.305 Latency(us) 00:15:11.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.305 =================================================================================================================== 00:15:11.305 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.305 02:12:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:11.305 02:12:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:11.305 02:12:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72727' 00:15:11.305 02:12:25 -- common/autotest_common.sh@945 -- # kill 72727 00:15:11.305 02:12:25 -- common/autotest_common.sh@950 -- # wait 72727 00:15:11.563 02:12:26 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:11.563 02:12:26 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:11.563 02:12:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:11.563 02:12:26 -- nvmf/common.sh@116 -- # sync 00:15:11.563 02:12:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:11.563 02:12:26 -- nvmf/common.sh@119 -- # set +e 00:15:11.563 02:12:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:11.563 02:12:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:11.563 rmmod nvme_tcp 00:15:11.563 rmmod nvme_fabrics 00:15:11.563 rmmod nvme_keyring 00:15:11.563 02:12:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:11.563 02:12:26 -- nvmf/common.sh@123 -- # set -e 00:15:11.563 02:12:26 -- nvmf/common.sh@124 -- # return 0 00:15:11.563 02:12:26 -- nvmf/common.sh@477 -- # '[' -n 72677 ']' 00:15:11.563 02:12:26 -- nvmf/common.sh@478 -- # killprocess 72677 00:15:11.563 02:12:26 -- common/autotest_common.sh@926 -- # '[' -z 72677 ']' 00:15:11.563 02:12:26 -- common/autotest_common.sh@930 -- # kill -0 72677 00:15:11.823 02:12:26 -- common/autotest_common.sh@931 -- # uname 00:15:11.823 02:12:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:11.823 02:12:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72677 00:15:11.823 killing process with pid 72677 00:15:11.823 02:12:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:11.823 02:12:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:11.823 02:12:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72677' 00:15:11.823 02:12:26 -- 
common/autotest_common.sh@945 -- # kill 72677 00:15:11.823 02:12:26 -- common/autotest_common.sh@950 -- # wait 72677 00:15:11.823 02:12:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:11.823 02:12:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:11.823 02:12:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:11.823 02:12:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.823 02:12:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:11.823 02:12:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.823 02:12:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.823 02:12:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.823 02:12:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:11.823 00:15:11.823 real 0m13.397s 00:15:11.823 user 0m23.046s 00:15:11.823 sys 0m2.052s 00:15:11.823 02:12:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.823 ************************************ 00:15:11.823 END TEST nvmf_queue_depth 00:15:11.823 02:12:26 -- common/autotest_common.sh@10 -- # set +x 00:15:11.823 ************************************ 00:15:12.082 02:12:26 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:12.082 02:12:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:12.082 02:12:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:12.082 02:12:26 -- common/autotest_common.sh@10 -- # set +x 00:15:12.082 ************************************ 00:15:12.082 START TEST nvmf_multipath 00:15:12.082 ************************************ 00:15:12.082 02:12:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:12.082 * Looking for test storage... 
00:15:12.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:12.082 02:12:26 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.082 02:12:26 -- nvmf/common.sh@7 -- # uname -s 00:15:12.082 02:12:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.082 02:12:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.082 02:12:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.082 02:12:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.082 02:12:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.082 02:12:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.082 02:12:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.082 02:12:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.082 02:12:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.082 02:12:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.082 02:12:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:15:12.082 02:12:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:15:12.082 02:12:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.082 02:12:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.082 02:12:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.082 02:12:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.082 02:12:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.082 02:12:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.082 02:12:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.082 02:12:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.082 02:12:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.082 02:12:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.082 02:12:26 -- 
paths/export.sh@5 -- # export PATH 00:15:12.082 02:12:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.082 02:12:26 -- nvmf/common.sh@46 -- # : 0 00:15:12.082 02:12:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:12.082 02:12:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:12.082 02:12:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:12.082 02:12:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.082 02:12:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.082 02:12:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:12.082 02:12:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:12.082 02:12:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:12.083 02:12:26 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:12.083 02:12:26 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:12.083 02:12:26 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:12.083 02:12:26 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.083 02:12:26 -- target/multipath.sh@43 -- # nvmftestinit 00:15:12.083 02:12:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:12.083 02:12:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.083 02:12:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:12.083 02:12:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:12.083 02:12:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:12.083 02:12:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.083 02:12:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.083 02:12:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.083 02:12:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:12.083 02:12:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:12.083 02:12:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:12.083 02:12:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:12.083 02:12:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:12.083 02:12:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:12.083 02:12:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.083 02:12:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.083 02:12:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:12.083 02:12:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:12.083 02:12:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.083 02:12:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.083 02:12:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.083 02:12:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.083 02:12:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.083 02:12:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.083 02:12:26 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.083 02:12:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.083 02:12:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:12.083 02:12:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:12.083 Cannot find device "nvmf_tgt_br" 00:15:12.083 02:12:26 -- nvmf/common.sh@154 -- # true 00:15:12.083 02:12:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.083 Cannot find device "nvmf_tgt_br2" 00:15:12.083 02:12:26 -- nvmf/common.sh@155 -- # true 00:15:12.083 02:12:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:12.083 02:12:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:12.083 Cannot find device "nvmf_tgt_br" 00:15:12.083 02:12:26 -- nvmf/common.sh@157 -- # true 00:15:12.083 02:12:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:12.083 Cannot find device "nvmf_tgt_br2" 00:15:12.083 02:12:26 -- nvmf/common.sh@158 -- # true 00:15:12.083 02:12:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:12.083 02:12:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:12.340 02:12:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.340 02:12:26 -- nvmf/common.sh@161 -- # true 00:15:12.340 02:12:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.340 02:12:26 -- nvmf/common.sh@162 -- # true 00:15:12.340 02:12:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.340 02:12:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.340 02:12:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.340 02:12:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.340 02:12:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.340 02:12:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.340 02:12:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.340 02:12:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:12.340 02:12:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:12.340 02:12:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:12.340 02:12:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:12.340 02:12:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:12.340 02:12:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:12.340 02:12:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.340 02:12:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.340 02:12:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.340 02:12:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:12.340 02:12:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:12.340 02:12:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.340 02:12:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.340 02:12:26 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.340 02:12:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.340 02:12:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.340 02:12:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:12.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:12.340 00:15:12.340 --- 10.0.0.2 ping statistics --- 00:15:12.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.341 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:12.341 02:12:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:12.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:15:12.341 00:15:12.341 --- 10.0.0.3 ping statistics --- 00:15:12.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.341 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:12.341 02:12:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:12.341 00:15:12.341 --- 10.0.0.1 ping statistics --- 00:15:12.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.341 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:12.341 02:12:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.341 02:12:26 -- nvmf/common.sh@421 -- # return 0 00:15:12.341 02:12:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:12.341 02:12:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.341 02:12:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:12.341 02:12:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:12.341 02:12:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.341 02:12:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:12.341 02:12:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:12.341 02:12:26 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:12.341 02:12:26 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:12.341 02:12:26 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:12.341 02:12:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:12.341 02:12:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:12.341 02:12:26 -- common/autotest_common.sh@10 -- # set +x 00:15:12.341 02:12:26 -- nvmf/common.sh@469 -- # nvmfpid=73055 00:15:12.341 02:12:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:12.341 02:12:26 -- nvmf/common.sh@470 -- # waitforlisten 73055 00:15:12.341 02:12:26 -- common/autotest_common.sh@819 -- # '[' -z 73055 ']' 00:15:12.341 02:12:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.341 02:12:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:12.341 02:12:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
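For multipath the point of all this veth plumbing is that the target namespace ends up with two addresses, 10.0.0.2 and 10.0.0.3, reachable from the initiator over one bridge, so the same subsystem can later be exported on two listeners and exercised as two independent paths. Condensed, the topology the log just rebuilt is roughly the following (simplified; the real nvmf_veth_init in test/nvmf/common.sh also tears down leftovers from earlier runs first):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # target path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br                         # everything hangs off one bridge
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the log (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply verify this topology before nvmf_tgt is started on core mask 0xF.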
00:15:12.341 02:12:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:12.341 02:12:26 -- common/autotest_common.sh@10 -- # set +x 00:15:12.598 [2024-05-14 02:12:26.998556] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:12.598 [2024-05-14 02:12:26.998701] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.598 [2024-05-14 02:12:27.135741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.856 [2024-05-14 02:12:27.195432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:12.856 [2024-05-14 02:12:27.195577] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.856 [2024-05-14 02:12:27.195591] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.856 [2024-05-14 02:12:27.195600] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.856 [2024-05-14 02:12:27.195683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.856 [2024-05-14 02:12:27.196098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.856 [2024-05-14 02:12:27.196176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.856 [2024-05-14 02:12:27.196186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.423 02:12:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:13.423 02:12:27 -- common/autotest_common.sh@852 -- # return 0 00:15:13.423 02:12:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:13.423 02:12:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:13.423 02:12:27 -- common/autotest_common.sh@10 -- # set +x 00:15:13.423 02:12:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.423 02:12:27 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:13.680 [2024-05-14 02:12:28.191685] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.680 02:12:28 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:13.938 Malloc0 00:15:14.197 02:12:28 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:14.454 02:12:28 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:14.712 02:12:29 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.970 [2024-05-14 02:12:29.346103] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.970 02:12:29 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:15.227 [2024-05-14 02:12:29.590363] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:15.227 02:12:29 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:15.485 02:12:29 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:15.485 02:12:30 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.485 02:12:30 -- common/autotest_common.sh@1177 -- # local i=0 00:15:15.485 02:12:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.485 02:12:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:15.485 02:12:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:18.016 02:12:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:18.016 02:12:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:18.016 02:12:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:18.016 02:12:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:18.016 02:12:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.016 02:12:32 -- common/autotest_common.sh@1187 -- # return 0 00:15:18.016 02:12:32 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:18.016 02:12:32 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:18.016 02:12:32 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:18.016 02:12:32 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:18.016 02:12:32 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:18.016 02:12:32 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:18.016 02:12:32 -- target/multipath.sh@38 -- # return 0 00:15:18.016 02:12:32 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:18.016 02:12:32 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:18.016 02:12:32 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:18.016 02:12:32 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:18.016 02:12:32 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:18.016 02:12:32 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:18.016 02:12:32 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:18.016 02:12:32 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:18.016 02:12:32 -- target/multipath.sh@22 -- # local timeout=20 00:15:18.016 02:12:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:18.016 02:12:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:18.016 02:12:32 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:18.016 02:12:32 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:18.016 02:12:32 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:18.016 02:12:32 -- target/multipath.sh@22 -- # local timeout=20 00:15:18.016 02:12:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:18.016 02:12:32 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.016 02:12:32 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:18.016 02:12:32 -- target/multipath.sh@85 -- # echo numa 00:15:18.016 02:12:32 -- target/multipath.sh@88 -- # fio_pid=73200 00:15:18.016 02:12:32 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:18.016 02:12:32 -- target/multipath.sh@90 -- # sleep 1 00:15:18.016 [global] 00:15:18.016 thread=1 00:15:18.016 invalidate=1 00:15:18.016 rw=randrw 00:15:18.016 time_based=1 00:15:18.016 runtime=6 00:15:18.016 ioengine=libaio 00:15:18.016 direct=1 00:15:18.016 bs=4096 00:15:18.016 iodepth=128 00:15:18.016 norandommap=0 00:15:18.016 numjobs=1 00:15:18.016 00:15:18.016 verify_dump=1 00:15:18.016 verify_backlog=512 00:15:18.016 verify_state_save=0 00:15:18.016 do_verify=1 00:15:18.016 verify=crc32c-intel 00:15:18.016 [job0] 00:15:18.016 filename=/dev/nvme0n1 00:15:18.016 Could not set queue depth (nvme0n1) 00:15:18.016 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:18.016 fio-3.35 00:15:18.016 Starting 1 thread 00:15:18.583 02:12:33 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:18.842 02:12:33 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:19.110 02:12:33 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:19.110 02:12:33 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:19.110 02:12:33 -- target/multipath.sh@22 -- # local timeout=20 00:15:19.110 02:12:33 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:19.110 02:12:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:19.110 02:12:33 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:19.110 02:12:33 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:19.110 02:12:33 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:19.110 02:12:33 -- target/multipath.sh@22 -- # local timeout=20 00:15:19.110 02:12:33 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:19.111 02:12:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.111 02:12:33 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:19.111 02:12:33 -- target/multipath.sh@25 -- # sleep 1s 00:15:20.050 02:12:34 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:20.050 02:12:34 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:20.050 02:12:34 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:20.050 02:12:34 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:20.616 02:12:34 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:20.874 02:12:35 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:20.874 02:12:35 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:20.874 02:12:35 -- target/multipath.sh@22 -- # local timeout=20 00:15:20.874 02:12:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:20.874 02:12:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:20.874 02:12:35 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:20.874 02:12:35 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:20.874 02:12:35 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:20.874 02:12:35 -- target/multipath.sh@22 -- # local timeout=20 00:15:20.874 02:12:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:20.874 02:12:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:20.874 02:12:35 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:20.874 02:12:35 -- target/multipath.sh@25 -- # sleep 1s 00:15:21.810 02:12:36 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:21.810 02:12:36 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:21.810 02:12:36 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:21.810 02:12:36 -- target/multipath.sh@104 -- # wait 73200 00:15:24.339 00:15:24.339 job0: (groupid=0, jobs=1): err= 0: pid=73221: Tue May 14 02:12:38 2024 00:15:24.339 read: IOPS=11.2k, BW=43.8MiB/s (46.0MB/s)(263MiB/6006msec) 00:15:24.339 slat (usec): min=3, max=5130, avg=49.40, stdev=216.19 00:15:24.339 clat (usec): min=352, max=12965, avg=7706.94, stdev=1190.33 00:15:24.339 lat (usec): min=371, max=13130, avg=7756.34, stdev=1199.42 00:15:24.339 clat percentiles (usec): 00:15:24.339 | 1.00th=[ 4555], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 6849], 00:15:24.339 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 7963], 00:15:24.339 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9765], 00:15:24.339 | 99.00th=[11076], 99.50th=[11469], 99.90th=[12125], 99.95th=[12387], 00:15:24.339 | 99.99th=[12649] 00:15:24.339 bw ( KiB/s): min=11576, max=33256, per=53.46%, avg=23990.82, stdev=5831.29, samples=11 00:15:24.339 iops : min= 2894, max= 8314, avg=5997.64, stdev=1457.79, samples=11 00:15:24.339 write: IOPS=6727, BW=26.3MiB/s (27.6MB/s)(142MiB/5398msec); 0 zone resets 00:15:24.339 slat (usec): min=5, max=2091, avg=62.66, stdev=140.51 00:15:24.339 clat (usec): min=315, max=12825, avg=6671.29, stdev=1109.69 00:15:24.339 lat (usec): min=385, max=12856, avg=6733.95, stdev=1114.24 00:15:24.339 clat percentiles (usec): 00:15:24.339 | 1.00th=[ 3392], 5.00th=[ 4621], 10.00th=[ 5473], 20.00th=[ 6063], 00:15:24.339 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6915], 00:15:24.339 | 70.00th=[ 7111], 80.00th=[ 7373], 90.00th=[ 7898], 95.00th=[ 8356], 00:15:24.339 | 99.00th=[ 9372], 99.50th=[10159], 99.90th=[11469], 99.95th=[11863], 00:15:24.339 | 99.99th=[12780] 00:15:24.339 bw ( KiB/s): min=12200, max=32720, per=89.20%, avg=24003.00, stdev=5490.34, samples=11 00:15:24.339 iops : min= 3050, max= 8180, avg=6000.73, stdev=1372.57, samples=11 00:15:24.339 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:24.339 lat (msec) : 2=0.08%, 4=1.04%, 10=96.02%, 20=2.84% 00:15:24.339 cpu : usr=6.43%, sys=25.93%, ctx=6539, majf=0, minf=133 00:15:24.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:24.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:24.339 issued rwts: total=67381,36313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:24.339 00:15:24.339 Run status group 0 (all jobs): 00:15:24.339 READ: bw=43.8MiB/s (46.0MB/s), 43.8MiB/s-43.8MiB/s (46.0MB/s-46.0MB/s), io=263MiB (276MB), run=6006-6006msec 00:15:24.339 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=142MiB (149MB), run=5398-5398msec 00:15:24.339 00:15:24.339 Disk stats (read/write): 00:15:24.339 nvme0n1: ios=66599/35426, merge=0/0, ticks=478701/219133, in_queue=697834, util=98.62% 00:15:24.339 02:12:38 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:24.339 02:12:38 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:24.339 02:12:38 -- target/multipath.sh@109 -- 
# check_ana_state nvme0c0n1 optimized 00:15:24.339 02:12:38 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:24.339 02:12:38 -- target/multipath.sh@22 -- # local timeout=20 00:15:24.339 02:12:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:24.339 02:12:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:24.339 02:12:38 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:24.339 02:12:38 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:24.339 02:12:38 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:24.339 02:12:38 -- target/multipath.sh@22 -- # local timeout=20 00:15:24.339 02:12:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:24.339 02:12:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:24.339 02:12:38 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:24.339 02:12:38 -- target/multipath.sh@25 -- # sleep 1s 00:15:25.714 02:12:39 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:25.714 02:12:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:25.714 02:12:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:25.714 02:12:39 -- target/multipath.sh@113 -- # echo round-robin 00:15:25.714 02:12:39 -- target/multipath.sh@116 -- # fio_pid=73357 00:15:25.714 02:12:39 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:25.714 02:12:39 -- target/multipath.sh@118 -- # sleep 1 00:15:25.714 [global] 00:15:25.714 thread=1 00:15:25.714 invalidate=1 00:15:25.714 rw=randrw 00:15:25.714 time_based=1 00:15:25.714 runtime=6 00:15:25.714 ioengine=libaio 00:15:25.714 direct=1 00:15:25.714 bs=4096 00:15:25.714 iodepth=128 00:15:25.714 norandommap=0 00:15:25.714 numjobs=1 00:15:25.714 00:15:25.714 verify_dump=1 00:15:25.714 verify_backlog=512 00:15:25.714 verify_state_save=0 00:15:25.714 do_verify=1 00:15:25.714 verify=crc32c-intel 00:15:25.714 [job0] 00:15:25.714 filename=/dev/nvme0n1 00:15:25.714 Could not set queue depth (nvme0n1) 00:15:25.714 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:25.714 fio-3.35 00:15:25.714 Starting 1 thread 00:15:26.647 02:12:40 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:26.647 02:12:41 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:26.906 02:12:41 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:26.906 02:12:41 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:26.906 02:12:41 -- target/multipath.sh@22 -- # local timeout=20 00:15:26.906 02:12:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:26.906 02:12:41 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:26.906 02:12:41 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:26.906 02:12:41 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:26.906 02:12:41 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:26.906 02:12:41 -- target/multipath.sh@22 -- # local timeout=20 00:15:26.906 02:12:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:26.906 02:12:41 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:26.906 02:12:41 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:26.906 02:12:41 -- target/multipath.sh@25 -- # sleep 1s 00:15:28.290 02:12:42 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:28.290 02:12:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:28.290 02:12:42 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:28.290 02:12:42 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:28.290 02:12:42 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:28.549 02:12:43 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:28.549 02:12:43 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:28.549 02:12:43 -- target/multipath.sh@22 -- # local timeout=20 00:15:28.549 02:12:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:28.549 02:12:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:28.549 02:12:43 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:28.549 02:12:43 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:28.549 02:12:43 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:28.549 02:12:43 -- target/multipath.sh@22 -- # local timeout=20 00:15:28.549 02:12:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:28.549 02:12:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:28.549 02:12:43 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:28.549 02:12:43 -- target/multipath.sh@25 -- # sleep 1s 00:15:29.484 02:12:44 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:29.484 02:12:44 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:29.484 02:12:44 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:29.484 02:12:44 -- target/multipath.sh@132 -- # wait 73357 00:15:32.014 00:15:32.014 job0: (groupid=0, jobs=1): err= 0: pid=73379: Tue May 14 02:12:46 2024 00:15:32.014 read: IOPS=12.8k, BW=50.0MiB/s (52.4MB/s)(300MiB/6003msec) 00:15:32.014 slat (usec): min=4, max=5194, avg=40.58, stdev=197.06 00:15:32.014 clat (usec): min=231, max=15696, avg=7013.11, stdev=1778.74 00:15:32.014 lat (usec): min=255, max=15711, avg=7053.69, stdev=1791.23 00:15:32.014 clat percentiles (usec): 00:15:32.014 | 1.00th=[ 1467], 5.00th=[ 3785], 10.00th=[ 4752], 20.00th=[ 5866], 00:15:32.014 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7373], 00:15:32.014 | 70.00th=[ 7767], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[ 9503], 00:15:32.014 | 99.00th=[11600], 99.50th=[12125], 99.90th=[13960], 99.95th=[14615], 00:15:32.014 | 99.99th=[15664] 00:15:32.014 bw ( KiB/s): min= 3864, max=41544, per=51.91%, avg=26569.45, stdev=10879.86, samples=11 00:15:32.014 iops : min= 966, max=10386, avg=6642.36, stdev=2719.97, samples=11 00:15:32.014 write: IOPS=7676, BW=30.0MiB/s (31.4MB/s)(151MiB/5035msec); 0 zone resets 00:15:32.014 slat (usec): min=12, max=3261, avg=52.54, stdev=121.60 00:15:32.014 clat (usec): min=219, max=12990, avg=5772.76, stdev=1770.88 00:15:32.014 lat (usec): min=276, max=13050, avg=5825.30, stdev=1781.52 00:15:32.014 clat percentiles (usec): 00:15:32.014 | 1.00th=[ 1057], 5.00th=[ 2638], 10.00th=[ 3294], 20.00th=[ 4015], 00:15:32.014 | 30.00th=[ 4883], 40.00th=[ 5932], 50.00th=[ 6259], 60.00th=[ 6587], 00:15:32.014 | 70.00th=[ 6783], 80.00th=[ 7111], 90.00th=[ 7504], 95.00th=[ 7898], 00:15:32.015 | 99.00th=[ 9765], 99.50th=[10421], 99.90th=[11731], 99.95th=[12256], 00:15:32.015 | 99.99th=[12780] 00:15:32.015 bw ( KiB/s): min= 4136, max=40960, per=86.57%, avg=26583.27, stdev=10651.79, samples=11 00:15:32.015 iops : min= 1034, max=10240, avg=6645.82, stdev=2662.95, samples=11 00:15:32.015 lat (usec) : 250=0.01%, 500=0.05%, 750=0.14%, 1000=0.32% 00:15:32.015 lat (msec) : 2=1.75%, 4=8.11%, 10=86.78%, 20=2.83% 00:15:32.015 cpu : usr=6.15%, sys=27.14%, ctx=8031, majf=0, minf=145 00:15:32.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:32.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.015 issued rwts: total=76811,38651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.015 00:15:32.015 Run status group 0 (all jobs): 00:15:32.015 READ: bw=50.0MiB/s (52.4MB/s), 50.0MiB/s-50.0MiB/s (52.4MB/s-52.4MB/s), io=300MiB (315MB), run=6003-6003msec 00:15:32.015 WRITE: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=151MiB (158MB), run=5035-5035msec 00:15:32.015 00:15:32.015 Disk stats (read/write): 00:15:32.015 nvme0n1: ios=75160/38651, merge=0/0, ticks=487856/204601, in_queue=692457, util=98.63% 00:15:32.015 02:12:46 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:32.015 02:12:46 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.015 02:12:46 -- common/autotest_common.sh@1198 -- # local i=0 00:15:32.015 02:12:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:32.015 
02:12:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.015 02:12:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:32.015 02:12:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.015 02:12:46 -- common/autotest_common.sh@1210 -- # return 0 00:15:32.015 02:12:46 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.015 02:12:46 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:32.015 02:12:46 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:32.015 02:12:46 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:32.015 02:12:46 -- target/multipath.sh@144 -- # nvmftestfini 00:15:32.015 02:12:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:32.015 02:12:46 -- nvmf/common.sh@116 -- # sync 00:15:32.015 02:12:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:32.015 02:12:46 -- nvmf/common.sh@119 -- # set +e 00:15:32.015 02:12:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:32.015 02:12:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:32.015 rmmod nvme_tcp 00:15:32.274 rmmod nvme_fabrics 00:15:32.274 rmmod nvme_keyring 00:15:32.274 02:12:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:32.274 02:12:46 -- nvmf/common.sh@123 -- # set -e 00:15:32.274 02:12:46 -- nvmf/common.sh@124 -- # return 0 00:15:32.274 02:12:46 -- nvmf/common.sh@477 -- # '[' -n 73055 ']' 00:15:32.274 02:12:46 -- nvmf/common.sh@478 -- # killprocess 73055 00:15:32.274 02:12:46 -- common/autotest_common.sh@926 -- # '[' -z 73055 ']' 00:15:32.274 02:12:46 -- common/autotest_common.sh@930 -- # kill -0 73055 00:15:32.274 02:12:46 -- common/autotest_common.sh@931 -- # uname 00:15:32.274 02:12:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:32.274 02:12:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73055 00:15:32.274 killing process with pid 73055 00:15:32.274 02:12:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:32.274 02:12:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:32.274 02:12:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73055' 00:15:32.274 02:12:46 -- common/autotest_common.sh@945 -- # kill 73055 00:15:32.274 02:12:46 -- common/autotest_common.sh@950 -- # wait 73055 00:15:32.532 02:12:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:32.532 02:12:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:32.532 02:12:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:32.532 02:12:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.532 02:12:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:32.532 02:12:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.532 02:12:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.532 02:12:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.532 02:12:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:32.532 00:15:32.532 real 0m20.447s 00:15:32.532 user 1m20.687s 00:15:32.532 sys 0m6.704s 00:15:32.532 02:12:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.532 02:12:46 -- common/autotest_common.sh@10 -- # set +x 00:15:32.532 ************************************ 00:15:32.532 END TEST nvmf_multipath 00:15:32.532 ************************************ 00:15:32.532 02:12:46 -- nvmf/nvmf.sh@52 -- # 
run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:32.532 02:12:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:32.532 02:12:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:32.532 02:12:46 -- common/autotest_common.sh@10 -- # set +x 00:15:32.532 ************************************ 00:15:32.532 START TEST nvmf_zcopy 00:15:32.532 ************************************ 00:15:32.532 02:12:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:32.532 * Looking for test storage... 00:15:32.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:32.532 02:12:47 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.532 02:12:47 -- nvmf/common.sh@7 -- # uname -s 00:15:32.532 02:12:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.532 02:12:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.532 02:12:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.532 02:12:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.532 02:12:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.532 02:12:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.532 02:12:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.532 02:12:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.532 02:12:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.532 02:12:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.532 02:12:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:15:32.532 02:12:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:15:32.533 02:12:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.533 02:12:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.533 02:12:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:32.533 02:12:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.533 02:12:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.533 02:12:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.533 02:12:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.533 02:12:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.533 02:12:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:32.533 02:12:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.533 02:12:47 -- paths/export.sh@5 -- # export PATH 00:15:32.533 02:12:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.533 02:12:47 -- nvmf/common.sh@46 -- # : 0 00:15:32.533 02:12:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:32.533 02:12:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:32.533 02:12:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:32.533 02:12:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.533 02:12:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.533 02:12:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:32.533 02:12:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:32.533 02:12:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:32.533 02:12:47 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:32.533 02:12:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:32.533 02:12:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.533 02:12:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:32.533 02:12:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:32.533 02:12:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:32.533 02:12:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.533 02:12:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.533 02:12:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.533 02:12:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:32.533 02:12:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:32.533 02:12:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:32.533 02:12:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:32.533 02:12:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:32.533 02:12:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:32.533 02:12:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.533 02:12:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.533 02:12:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:32.533 02:12:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:32.533 02:12:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:32.533 02:12:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:32.533 02:12:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:32.533 02:12:47 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.533 02:12:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:32.533 02:12:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:32.533 02:12:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:32.533 02:12:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:32.533 02:12:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:32.533 02:12:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:32.533 Cannot find device "nvmf_tgt_br" 00:15:32.533 02:12:47 -- nvmf/common.sh@154 -- # true 00:15:32.533 02:12:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.533 Cannot find device "nvmf_tgt_br2" 00:15:32.533 02:12:47 -- nvmf/common.sh@155 -- # true 00:15:32.533 02:12:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:32.533 02:12:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:32.533 Cannot find device "nvmf_tgt_br" 00:15:32.533 02:12:47 -- nvmf/common.sh@157 -- # true 00:15:32.533 02:12:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:32.533 Cannot find device "nvmf_tgt_br2" 00:15:32.533 02:12:47 -- nvmf/common.sh@158 -- # true 00:15:32.533 02:12:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:32.791 02:12:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:32.791 02:12:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.791 02:12:47 -- nvmf/common.sh@161 -- # true 00:15:32.791 02:12:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.791 02:12:47 -- nvmf/common.sh@162 -- # true 00:15:32.791 02:12:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:32.791 02:12:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:32.791 02:12:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:32.791 02:12:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:32.791 02:12:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:32.791 02:12:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.791 02:12:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.791 02:12:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:32.791 02:12:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:32.791 02:12:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:32.791 02:12:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:32.792 02:12:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:32.792 02:12:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:32.792 02:12:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.792 02:12:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.792 02:12:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.792 02:12:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:15:32.792 02:12:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:32.792 02:12:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.792 02:12:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:32.792 02:12:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:33.049 02:12:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:33.049 02:12:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:33.049 02:12:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:33.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:15:33.049 00:15:33.049 --- 10.0.0.2 ping statistics --- 00:15:33.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.049 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:33.049 02:12:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:33.049 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:33.049 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:15:33.049 00:15:33.049 --- 10.0.0.3 ping statistics --- 00:15:33.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.049 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:33.049 02:12:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:33.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:33.049 00:15:33.049 --- 10.0.0.1 ping statistics --- 00:15:33.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.050 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:33.050 02:12:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.050 02:12:47 -- nvmf/common.sh@421 -- # return 0 00:15:33.050 02:12:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:33.050 02:12:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.050 02:12:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:33.050 02:12:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:33.050 02:12:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.050 02:12:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:33.050 02:12:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:33.050 02:12:47 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:33.050 02:12:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:33.050 02:12:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:33.050 02:12:47 -- common/autotest_common.sh@10 -- # set +x 00:15:33.050 02:12:47 -- nvmf/common.sh@469 -- # nvmfpid=73648 00:15:33.050 02:12:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:33.050 02:12:47 -- nvmf/common.sh@470 -- # waitforlisten 73648 00:15:33.050 02:12:47 -- common/autotest_common.sh@819 -- # '[' -z 73648 ']' 00:15:33.050 02:12:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.050 02:12:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:33.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.050 02:12:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
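A note on the topology being built above: nvmf_veth_init wires the initiator and the in-namespace target together through a Linux bridge before the zcopy target is started. Condensed from the ip commands in the log (same interface, namespace, and address names; the "ip link set ... up" commands and the iptables ACCEPT rules are elided here), the setup is roughly:

    # initiator side stays in the root namespace, target side moves into nvmf_tgt_ns_spdk
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # 10.0.0.1 (initiator)
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # 10.0.0.2 (first listener)
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # 10.0.0.3 (second listener)
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge            # bridge the three *_br veth peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

The three pings that follow (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) simply confirm this bridge path is up before nvmf_tgt is launched.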
00:15:33.050 02:12:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:33.050 02:12:47 -- common/autotest_common.sh@10 -- # set +x 00:15:33.050 [2024-05-14 02:12:47.488753] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:33.050 [2024-05-14 02:12:47.488852] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.050 [2024-05-14 02:12:47.624394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.308 [2024-05-14 02:12:47.691139] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:33.308 [2024-05-14 02:12:47.691313] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.308 [2024-05-14 02:12:47.691329] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.308 [2024-05-14 02:12:47.691339] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.308 [2024-05-14 02:12:47.691368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.242 02:12:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:34.242 02:12:48 -- common/autotest_common.sh@852 -- # return 0 00:15:34.242 02:12:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:34.242 02:12:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:34.242 02:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:34.242 02:12:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.242 02:12:48 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:34.242 02:12:48 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:34.242 02:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.242 02:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:34.242 [2024-05-14 02:12:48.540093] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.242 02:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.242 02:12:48 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:34.242 02:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.242 02:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:34.242 02:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.242 02:12:48 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.242 02:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.242 02:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:34.242 [2024-05-14 02:12:48.556186] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.242 02:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.242 02:12:48 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:34.242 02:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.242 02:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:34.242 02:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.242 02:12:48 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
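For orientation, the rpc_cmd invocations above and just below are thin wrappers around scripts/rpc.py talking to the nvmf_tgt that was just started inside the namespace. Reconstructed from the log into stand-alone form (a sketch only; rpc_cmd adds its own socket handling, and the meanings of the -o and -c 0 transport flags are not spelled out here), the target-side setup for this zcopy run is approximately:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with the flags shown above; --zcopy turns on the zero-copy path this test exercises
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem cnode1: allow any host (-a), serial SPDK00000000000001, at most 10 namespaces (-m 10)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # listen on the in-namespace veth address; a discovery listener is added the same way in the log
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # back the subsystem with a malloc bdev, attached as namespace 1 just below
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1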
00:15:34.242 02:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.242 02:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:34.242 malloc0 00:15:34.242 02:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.242 02:12:48 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:34.242 02:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.242 02:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:34.242 02:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.242 02:12:48 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:34.242 02:12:48 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:34.242 02:12:48 -- nvmf/common.sh@520 -- # config=() 00:15:34.242 02:12:48 -- nvmf/common.sh@520 -- # local subsystem config 00:15:34.242 02:12:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:34.242 02:12:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:34.242 { 00:15:34.242 "params": { 00:15:34.242 "name": "Nvme$subsystem", 00:15:34.242 "trtype": "$TEST_TRANSPORT", 00:15:34.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.242 "adrfam": "ipv4", 00:15:34.242 "trsvcid": "$NVMF_PORT", 00:15:34.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.242 "hdgst": ${hdgst:-false}, 00:15:34.242 "ddgst": ${ddgst:-false} 00:15:34.242 }, 00:15:34.242 "method": "bdev_nvme_attach_controller" 00:15:34.242 } 00:15:34.242 EOF 00:15:34.242 )") 00:15:34.242 02:12:48 -- nvmf/common.sh@542 -- # cat 00:15:34.242 02:12:48 -- nvmf/common.sh@544 -- # jq . 00:15:34.242 02:12:48 -- nvmf/common.sh@545 -- # IFS=, 00:15:34.242 02:12:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:34.242 "params": { 00:15:34.242 "name": "Nvme1", 00:15:34.242 "trtype": "tcp", 00:15:34.242 "traddr": "10.0.0.2", 00:15:34.242 "adrfam": "ipv4", 00:15:34.242 "trsvcid": "4420", 00:15:34.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.242 "hdgst": false, 00:15:34.242 "ddgst": false 00:15:34.242 }, 00:15:34.242 "method": "bdev_nvme_attach_controller" 00:15:34.242 }' 00:15:34.242 [2024-05-14 02:12:48.645918] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:34.242 [2024-05-14 02:12:48.646018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73701 ] 00:15:34.242 [2024-05-14 02:12:48.785235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.501 [2024-05-14 02:12:48.857491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.501 Running I/O for 10 seconds... 
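On the initiator side, bdevperf is fed its configuration through --json /dev/fd/62, and the JSON printed above is the part that matters: a single bdev_nvme_attach_controller entry pointing at the listener just configured. Purely as an illustration (this is not how the test invokes it), the same attachment could be expressed as one rpc.py call against a running app, assuming the usual bdev_nvme_attach_controller options:

    # hypothetical stand-alone equivalent of the "Nvme1" entry in the JSON above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The 10-second verify run this starts reports its throughput and latency in the table that follows.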
00:15:44.471 00:15:44.471 Latency(us) 00:15:44.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.471 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:44.471 Verification LBA range: start 0x0 length 0x1000 00:15:44.471 Nvme1n1 : 10.01 8741.70 68.29 0.00 0.00 14604.03 1392.64 26691.03 00:15:44.471 =================================================================================================================== 00:15:44.471 Total : 8741.70 68.29 0.00 0.00 14604.03 1392.64 26691.03 00:15:44.730 02:12:59 -- target/zcopy.sh@39 -- # perfpid=73812 00:15:44.730 02:12:59 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:44.730 02:12:59 -- common/autotest_common.sh@10 -- # set +x 00:15:44.730 02:12:59 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:44.730 02:12:59 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:44.730 02:12:59 -- nvmf/common.sh@520 -- # config=() 00:15:44.730 02:12:59 -- nvmf/common.sh@520 -- # local subsystem config 00:15:44.730 02:12:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:44.730 02:12:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:44.730 { 00:15:44.730 "params": { 00:15:44.730 "name": "Nvme$subsystem", 00:15:44.730 "trtype": "$TEST_TRANSPORT", 00:15:44.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:44.730 "adrfam": "ipv4", 00:15:44.730 "trsvcid": "$NVMF_PORT", 00:15:44.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:44.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:44.730 "hdgst": ${hdgst:-false}, 00:15:44.730 "ddgst": ${ddgst:-false} 00:15:44.730 }, 00:15:44.730 "method": "bdev_nvme_attach_controller" 00:15:44.730 } 00:15:44.730 EOF 00:15:44.730 )") 00:15:44.730 [2024-05-14 02:12:59.206995] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-05-14 02:12:59.207031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 02:12:59 -- nvmf/common.sh@542 -- # cat 00:15:44.730 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 02:12:59 -- nvmf/common.sh@544 -- # jq . 
00:15:44.730 [2024-05-14 02:12:59.214956] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-05-14 02:12:59.214984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 02:12:59 -- nvmf/common.sh@545 -- # IFS=, 00:15:44.730 02:12:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:44.730 "params": { 00:15:44.730 "name": "Nvme1", 00:15:44.730 "trtype": "tcp", 00:15:44.730 "traddr": "10.0.0.2", 00:15:44.730 "adrfam": "ipv4", 00:15:44.730 "trsvcid": "4420", 00:15:44.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:44.730 "hdgst": false, 00:15:44.730 "ddgst": false 00:15:44.730 }, 00:15:44.730 "method": "bdev_nvme_attach_controller" 00:15:44.730 }' 00:15:44.730 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-05-14 02:12:59.226983] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-05-14 02:12:59.227015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-05-14 02:12:59.239005] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-05-14 02:12:59.239046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-05-14 02:12:59.251009] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-05-14 02:12:59.251050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-05-14 02:12:59.260255] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:15:44.730 [2024-05-14 02:12:59.260929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73812 ] 00:15:44.730 [2024-05-14 02:12:59.263005] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-05-14 02:12:59.263038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-05-14 02:12:59.274996] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-05-14 02:12:59.275026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-05-14 02:12:59.287005] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-05-14 02:12:59.287037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-05-14 02:12:59.298991] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-05-14 02:12:59.299022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.730 [2024-05-14 02:12:59.310987] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.730 [2024-05-14 02:12:59.311015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.730 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.323005] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.323039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.335004] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.335035] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.347103] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.347178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.359026] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.359057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.371056] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.371102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.379019] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.379052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.391033] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.391068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.399036] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.399089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 [2024-05-14 02:12:59.402266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.411055] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.411089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.423048] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.423094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.435093] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.435172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.990 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.990 [2024-05-14 02:12:59.447072] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.990 [2024-05-14 02:12:59.447111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.459059] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.459090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.471066] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.471099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.481425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.991 [2024-05-14 02:12:59.483068] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.483095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.495062] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.495090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.507118] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.507150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.519132] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.519170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.531087] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.531123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.543139] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.543195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.551105] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.551150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.563119] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.563165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.991 [2024-05-14 02:12:59.571093] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.991 [2024-05-14 02:12:59.571128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.991 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.250 [2024-05-14 02:12:59.583104] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.250 [2024-05-14 02:12:59.583134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.250 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.250 [2024-05-14 02:12:59.595104] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.250 [2024-05-14 02:12:59.595132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.250 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.250 [2024-05-14 02:12:59.607109] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.250 [2024-05-14 02:12:59.607143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.250 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.250 [2024-05-14 02:12:59.619153] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.250 [2024-05-14 02:12:59.619185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.250 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.250 Running I/O for 5 seconds... 
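(Editor's note, not part of the captured log: the repeated failures above are the target rejecting nvmf_subsystem_add_ns calls whose parameters name an NSID that already exists on nqn.2016-06.io.spdk:cnode1, which the JSON-RPC layer reports back as Code=-32602 "Invalid parameters". The following is a minimal sketch of one such request/response exchange using the parameter shape shown in the log; the socket path /var/tmp/spdk.sock and the helper name `call` are assumptions for illustration, not taken from this run.)

#!/usr/bin/env python3
# Hedged sketch: issue the same nvmf_subsystem_add_ns request seen failing in the log.
# Assumption: the SPDK target exposes its JSON-RPC server on the default Unix socket.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK RPC socket path

def call(method, params, request_id=1):
    """Send one JSON-RPC 2.0 request and return the decoded response."""
    req = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK_PATH)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                return None  # connection closed without a full reply
            buf += chunk
            try:
                return json.loads(buf)  # stop once a complete JSON document arrived
            except json.JSONDecodeError:
                continue  # partial response, keep reading

# Same parameters as the failing calls above: NSID 1 is already attached to
# nqn.2016-06.io.spdk:cnode1, so the target is expected to answer with -32602.
resp = call("nvmf_subsystem_add_ns",
            {"nqn": "nqn.2016-06.io.spdk:cnode1",
             "namespace": {"bdev_name": "malloc0", "nsid": 1}})
print(resp)  # e.g. {'jsonrpc': '2.0', 'id': 1, 'error': {'code': -32602, 'message': 'Invalid parameters'}}

(End of editor's note; the captured log continues below.)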
00:15:45.250 [2024-05-14 02:12:59.635454] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.250 [2024-05-14 02:12:59.635491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.250 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.250 [2024-05-14 02:12:59.645076] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.250 [2024-05-14 02:12:59.645110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.250 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.250 [2024-05-14 02:12:59.661183] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.250 [2024-05-14 02:12:59.661232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.250 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.250 [2024-05-14 02:12:59.678237] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.250 [2024-05-14 02:12:59.678305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.250 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.250 [2024-05-14 02:12:59.695089] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.250 [2024-05-14 02:12:59.695143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.250 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.250 [2024-05-14 02:12:59.711924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.250 [2024-05-14 02:12:59.711968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.251 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.251 [2024-05-14 02:12:59.727875] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.251 [2024-05-14 02:12:59.727913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.251 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:45.251 [2024-05-14 02:12:59.746585] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.251 [2024-05-14 02:12:59.746625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.251 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.251 [2024-05-14 02:12:59.761014] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.251 [2024-05-14 02:12:59.761051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.251 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.251 [2024-05-14 02:12:59.777526] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.251 [2024-05-14 02:12:59.777576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.251 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.251 [2024-05-14 02:12:59.795021] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.251 [2024-05-14 02:12:59.795059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.251 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.251 [2024-05-14 02:12:59.810434] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.251 [2024-05-14 02:12:59.810488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.251 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.251 [2024-05-14 02:12:59.827981] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.251 [2024-05-14 02:12:59.828054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.251 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:12:59.843032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:12:59.843074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:12:59.853291] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:12:59.853329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:12:59.869091] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:12:59.869130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:12:59.886115] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:12:59.886153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:12:59.902122] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:12:59.902163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:12:59.918720] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:12:59.918780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:12:59.935319] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:12:59.935356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:12:59.952278] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:12:59.952336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:12:59.968627] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:12:59.968664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:12:59.986002] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:12:59.986038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:12:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:13:00.001558] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:13:00.001597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:13:00.011328] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:13:00.011367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:13:00.022305] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:13:00.022361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:13:00.036231] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:13:00.036280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:13:00.053623] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:13:00.053671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:13:00.064404] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:13:00.064441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:13:00.078978] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:13:00.079014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.510 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.510 [2024-05-14 02:13:00.096769] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.510 [2024-05-14 02:13:00.096845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.770 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.770 [2024-05-14 02:13:00.112160] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.770 [2024-05-14 02:13:00.112213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.770 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.770 [2024-05-14 02:13:00.127919] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.770 [2024-05-14 02:13:00.127954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.770 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.770 [2024-05-14 02:13:00.145951] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.770 [2024-05-14 02:13:00.145989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.770 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.770 [2024-05-14 02:13:00.161204] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.770 [2024-05-14 02:13:00.161241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.770 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.770 [2024-05-14 02:13:00.180412] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.770 [2024-05-14 02:13:00.180448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.770 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.770 [2024-05-14 02:13:00.196205] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.770 [2024-05-14 02:13:00.196258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.770 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.770 [2024-05-14 02:13:00.213112] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.770 [2024-05-14 02:13:00.213148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.770 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.770 [2024-05-14 02:13:00.231061] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.770 [2024-05-14 02:13:00.231129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.770 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.770 [2024-05-14 02:13:00.246099] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.770 [2024-05-14 02:13:00.246141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.770 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.771 [2024-05-14 02:13:00.262335] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.771 [2024-05-14 02:13:00.262372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.771 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.771 [2024-05-14 02:13:00.278624] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.771 [2024-05-14 02:13:00.278661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.771 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.771 [2024-05-14 02:13:00.296614] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.771 [2024-05-14 02:13:00.296678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.771 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.771 [2024-05-14 02:13:00.312275] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.771 [2024-05-14 02:13:00.312314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.771 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.771 [2024-05-14 02:13:00.330736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.771 [2024-05-14 02:13:00.330818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.771 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.771 [2024-05-14 02:13:00.346329] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.771 [2024-05-14 02:13:00.346365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.771 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.363689] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.363726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.380412] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.380447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.396749] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.396798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.413101] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.413156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.429329] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.429367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.446463] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.446514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.463635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.463673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.480212] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.480248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.496867] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.496901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.513729] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.513787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.524393] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.524429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.538918] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.538953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.550929] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.550966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.030 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.030 [2024-05-14 02:13:00.569538] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.030 [2024-05-14 02:13:00.569602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.031 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.031 [2024-05-14 02:13:00.584191] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.031 [2024-05-14 02:13:00.584228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.031 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.031 [2024-05-14 02:13:00.593172] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.031 [2024-05-14 02:13:00.593207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.031 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.031 [2024-05-14 02:13:00.608979] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.031 [2024-05-14 02:13:00.609015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.031 2024/05/14 
02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.031 [2024-05-14 02:13:00.618336] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.031 [2024-05-14 02:13:00.618373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.634399] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.634436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.654152] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.654227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.669744] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.669790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.686339] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.686375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.705716] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.705752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.721685] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.721722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.738374] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.738413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.748738] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.748787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.764168] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.764215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.780902] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.780937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.797117] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.797153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.807701] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.807737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.290 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.290 [2024-05-14 02:13:00.820058] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.290 [2024-05-14 02:13:00.820094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:46.291 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.291 [2024-05-14 02:13:00.836311] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.291 [2024-05-14 02:13:00.836349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.291 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.291 [2024-05-14 02:13:00.852874] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.291 [2024-05-14 02:13:00.852948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.291 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.291 [2024-05-14 02:13:00.867921] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.291 [2024-05-14 02:13:00.867958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.291 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:00.883220] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:00.883260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:00.894102] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:00.894138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:00.910200] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:00.910251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:00.925103] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:00.925156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:00.940166] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:00.940201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:00.956716] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:00.956755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:00.973898] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:00.973953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:00.990192] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:00.990226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:01.008967] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:01.009004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:01.024096] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:01.024133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:01.041699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:01.041735] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:01.057192] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:01.057231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:01.067905] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:01.067940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:01.083068] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:01.083105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:01.099988] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:01.100027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:01.118215] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:01.118253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.550 [2024-05-14 02:13:01.133608] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.550 [2024-05-14 02:13:01.133648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.550 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.152334] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 
02:13:01.152372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.167929] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.167969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.186290] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.186333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.202176] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.202220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.218466] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.218504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.236224] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.236263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.251485] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.251524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.261924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:46.809 [2024-05-14 02:13:01.261959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.276197] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.276233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.285550] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.285586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.301760] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.301809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.318646] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.318684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.335220] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.335257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.809 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.809 [2024-05-14 02:13:01.351376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.809 [2024-05-14 02:13:01.351414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.810 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.810 [2024-05-14 02:13:01.369027] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:46.810 [2024-05-14 02:13:01.369064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.810 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.810 [2024-05-14 02:13:01.384912] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.810 [2024-05-14 02:13:01.384948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.810 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.402237] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.402275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.419527] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.419565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.435592] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.435629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.453089] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.453140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.470048] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.470086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.480682] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.480719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.496503] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.496539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.506580] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.506617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.521115] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.521151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.531106] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.531143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.547413] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.547491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.564181] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.564265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.581356] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.581428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.596516] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.596554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.606669] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.606706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.622645] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.622684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.638616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.638655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.069 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.069 [2024-05-14 02:13:01.657211] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.069 [2024-05-14 02:13:01.657253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.328 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.328 [2024-05-14 02:13:01.672730] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.328 [2024-05-14 02:13:01.672822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.328 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.328 [2024-05-14 
02:13:01.690771] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.328 [2024-05-14 02:13:01.690819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.328 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.328 [2024-05-14 02:13:01.707367] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.328 [2024-05-14 02:13:01.707404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.328 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.328 [2024-05-14 02:13:01.723621] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.328 [2024-05-14 02:13:01.723657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.328 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.328 [2024-05-14 02:13:01.739226] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.328 [2024-05-14 02:13:01.739278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.328 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.328 [2024-05-14 02:13:01.749402] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.328 [2024-05-14 02:13:01.749438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.328 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.328 [2024-05-14 02:13:01.764898] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.328 [2024-05-14 02:13:01.764959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.328 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.328 [2024-05-14 02:13:01.782117] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.328 [2024-05-14 02:13:01.782184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.328 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
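(Editor's note, not part of the captured console output.) The repeated entries in this stretch of the log all exercise the same negative path: the test keeps calling the nvmf_subsystem_add_ns JSON-RPC method against subsystem nqn.2016-06.io.spdk:cnode1 with bdev malloc0 and NSID 1, and because NSID 1 is already in use the target rejects every attempt with JSON-RPC error Code=-32602 Msg=Invalid parameters, which is the expected outcome. As a hedged illustration only, the wire-level exchange behind each of these entries would look roughly like the sketch below; the socket path /var/tmp/spdk.sock, the request id, and the send_rpc helper are assumptions for illustration and are not the SPDK rpc.py client or part of this test's scripts.

    # Hedged sketch: reconstructs the JSON-RPC 2.0 request/response pair that the
    # log entries above and below keep reporting. Standard library only; the socket
    # path and request id are assumptions, not taken from the test.
    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    def send_rpc(payload, sock_path="/var/tmp/spdk.sock"):
        # Connect to the target's UNIX-domain RPC socket, send one request, and
        # decode one reply. A single recv() is a simplification; a robust client
        # would loop until the full response has arrived.
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(payload).encode())
            return json.loads(s.recv(65536).decode())

    # With NSID 1 already attached to the subsystem, the reply carries an error
    # object like {"code": -32602, "message": "Invalid parameters"}, matching the
    # "error on JSON-RPC call ... Code=-32602 Msg=Invalid parameters" lines here.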
00:15:47.328 [2024-05-14 02:13:01.799082] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.328 [2024-05-14 02:13:01.799117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.328 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.328 [2024-05-14 02:13:01.815348] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.328 [2024-05-14 02:13:01.815383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.329 [2024-05-14 02:13:01.832086] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-05-14 02:13:01.832137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.329 [2024-05-14 02:13:01.848146] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-05-14 02:13:01.848191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.329 [2024-05-14 02:13:01.866083] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-05-14 02:13:01.866119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.329 [2024-05-14 02:13:01.881165] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-05-14 02:13:01.881199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.329 [2024-05-14 02:13:01.891299] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-05-14 02:13:01.891336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:47.329 [2024-05-14 02:13:01.902612] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.329 [2024-05-14 02:13:01.902650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.329 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:01.920047] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:01.920085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:01.937439] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:01.937478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:01.952149] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:01.952213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:01.970279] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:01.970324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:01.985789] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:01.985822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.003002] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.003038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.019927] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.019978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.030965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.031002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.046807] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.046881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.063626] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.063662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.081262] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.081559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.097032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.097278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.115683] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.115747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.131392] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.131430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.148122] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.148159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.157725] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.157777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.588 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.588 [2024-05-14 02:13:02.172540] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.588 [2024-05-14 02:13:02.172579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.191811] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.191896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.207672] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.207710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.225245] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.225283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.241532] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.241568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.259026] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.259064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.274856] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.274902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.293008] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.293052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.309137] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.309194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.326261] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.326300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.343117] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.343157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.360728] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.360775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.377515] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.377551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.394401] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.394484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.411088] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.411134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.847 [2024-05-14 02:13:02.427230] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.847 [2024-05-14 02:13:02.427267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.847 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.106 [2024-05-14 02:13:02.445042] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.106 [2024-05-14 02:13:02.445126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.106 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.106 [2024-05-14 02:13:02.460830] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.106 [2024-05-14 02:13:02.460884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.106 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.106 [2024-05-14 02:13:02.478357] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.106 [2024-05-14 02:13:02.478399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.106 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.106 [2024-05-14 02:13:02.493362] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.106 [2024-05-14 02:13:02.493405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.106 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.106 [2024-05-14 02:13:02.510516] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.106 [2024-05-14 02:13:02.510561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.106 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.106 [2024-05-14 02:13:02.526100] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.106 [2024-05-14 02:13:02.526155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.106 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.106 [2024-05-14 02:13:02.536233] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.106 [2024-05-14 02:13:02.536269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.106 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.106 [2024-05-14 02:13:02.550832] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.106 [2024-05-14 02:13:02.550881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.106 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.106 [2024-05-14 02:13:02.568511] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.106 [2024-05-14 02:13:02.568553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.107 2024/05/14 02:13:02 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.107 [2024-05-14 02:13:02.583699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.107 [2024-05-14 02:13:02.583750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.107 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.107 [2024-05-14 02:13:02.602498] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.107 [2024-05-14 02:13:02.602536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.107 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.107 [2024-05-14 02:13:02.618803] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.107 [2024-05-14 02:13:02.618870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.107 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.107 [2024-05-14 02:13:02.635333] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.107 [2024-05-14 02:13:02.635370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.107 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.107 [2024-05-14 02:13:02.652241] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.107 [2024-05-14 02:13:02.652293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.107 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.107 [2024-05-14 02:13:02.669429] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.107 [2024-05-14 02:13:02.669466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.107 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.107 [2024-05-14 02:13:02.685839] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.107 [2024-05-14 02:13:02.685875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.107 2024/05/14 02:13:02 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.703211] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.703303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.718871] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.718924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.735449] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.735488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.750948] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.750984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.766873] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.766908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.783728] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.783777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.800239] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.800278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 
02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.816494] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.816530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.833894] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.833952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.849800] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.849836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.858594] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.858629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.875225] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.875261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.891419] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.891456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.909812] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.909856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.925429] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.925466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.943414] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.943451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.366 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.366 [2024-05-14 02:13:02.954109] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.366 [2024-05-14 02:13:02.954158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:02.967221] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:02.967257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:02.982868] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:02.982910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:02.992595] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:02.992632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.008597] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.008648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.024132] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.024168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.040478] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.040521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.059565] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.059635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.075594] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.075646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.091399] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.091453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.100665] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.100707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.115990] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.116073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.126491] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.126527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.142570] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.142610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.159412] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.159449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.176463] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.176499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.191742] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.191792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.625 [2024-05-14 02:13:03.201433] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.625 [2024-05-14 02:13:03.201470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.625 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.213790] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.213826] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.229565] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.229603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.239994] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.240038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.255310] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.255345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.272586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.272623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.288285] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.288339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.304729] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.304778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.315939] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 
02:13:03.315987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.331323] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.331361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.350382] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.350420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.365038] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.365089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.381403] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.381445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.396927] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.884 [2024-05-14 02:13:03.396977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.884 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.884 [2024-05-14 02:13:03.407727] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.885 [2024-05-14 02:13:03.407776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.885 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.885 [2024-05-14 02:13:03.419111] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:48.885 [2024-05-14 02:13:03.419148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.885 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.885 [2024-05-14 02:13:03.434040] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.885 [2024-05-14 02:13:03.434079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.885 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.885 [2024-05-14 02:13:03.450888] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.885 [2024-05-14 02:13:03.450926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.885 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.885 [2024-05-14 02:13:03.466229] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.885 [2024-05-14 02:13:03.466279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.885 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.147 [2024-05-14 02:13:03.483727] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.147 [2024-05-14 02:13:03.483817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.147 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.147 [2024-05-14 02:13:03.501058] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.147 [2024-05-14 02:13:03.501095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.147 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.147 [2024-05-14 02:13:03.516833] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.147 [2024-05-14 02:13:03.516868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.533779] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:49.148 [2024-05-14 02:13:03.533813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.550109] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.550146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.567203] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.567237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.582731] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.582779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.592911] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.592946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.608076] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.608114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.623655] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.623706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.641220] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.641259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.651560] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.651598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.666284] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.666322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.676336] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.676394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.690660] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.690719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.701471] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.701685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.716431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.716585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.148 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.148 [2024-05-14 02:13:03.733711] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.148 [2024-05-14 02:13:03.733882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.409 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.409 [2024-05-14 02:13:03.749562] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.409 [2024-05-14 02:13:03.749600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.767181] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.767219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.782859] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.782896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.792942] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.792977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.807359] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.807395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.825242] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.825280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 
02:13:03.840246] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.840284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.858410] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.858475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.873716] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.873755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.883929] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.883965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.898433] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.898472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.916527] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.916565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.931811] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.931848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:49.410 [2024-05-14 02:13:03.949457] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.949497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.965134] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.965174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.976613] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.976651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.410 [2024-05-14 02:13:03.993716] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.410 [2024-05-14 02:13:03.993760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.410 2024/05/14 02:13:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.004752] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.004809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.020998] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.021034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.038933] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.038969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.048904] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.048939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.063460] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.063498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.073376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.073414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.089001] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.089076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.106540] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.106586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.121571] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.121607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.139431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.139467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.154784] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.154819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.172391] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.172428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.187726] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.187758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.205205] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.205237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.215498] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.215533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.229971] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.230003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.669 [2024-05-14 02:13:04.241452] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.669 [2024-05-14 02:13:04.241484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.669 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.928 [2024-05-14 02:13:04.259229] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.928 [2024-05-14 02:13:04.259263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.928 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.928 [2024-05-14 02:13:04.273826] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.928 [2024-05-14 02:13:04.273860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.928 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.928 [2024-05-14 02:13:04.289311] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.928 [2024-05-14 02:13:04.289345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.928 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.928 [2024-05-14 02:13:04.298194] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.928 [2024-05-14 02:13:04.298225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.928 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.928 [2024-05-14 02:13:04.313799] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.928 [2024-05-14 02:13:04.313831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.928 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.329391] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.329430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.339134] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.339165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.354526] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.354565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.371281] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.371312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.389053] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.389087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.405171] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.405203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.424662] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.424712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.439650] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.439693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.449886] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.449917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.464123] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.464155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.480544] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.480576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.498145] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.498200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.929 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.929 [2024-05-14 02:13:04.513270] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.929 [2024-05-14 02:13:04.513305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.530157] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.530199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.546829] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.546860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.563646] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.563680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.579987] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.580019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.596332] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.596365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.613003] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.613035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.630249] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.630289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 00:15:50.189 Latency(us) 00:15:50.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.189 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:50.189 Nvme1n1 : 5.01 11360.16 88.75 0.00 0.00 11253.01 4796.04 18588.39 00:15:50.189 =================================================================================================================== 00:15:50.189 Total : 11360.16 88.75 0.00 0.00 11253.01 4796.04 18588.39 00:15:50.189 [2024-05-14 02:13:04.641243] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.641277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.653218] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.653243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.665255] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.665291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.677260] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.677298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.689300] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.689348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.701269] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.701317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.713267] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.713306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.725230] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.725253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.737229] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.737251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.749235] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.749252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.761267] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.761299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.189 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.189 [2024-05-14 02:13:04.773260] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.189 [2024-05-14 02:13:04.773284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.449 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.449 [2024-05-14 02:13:04.785260] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.449 [2024-05-14 02:13:04.785285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.449 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.449 [2024-05-14 02:13:04.797306] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.449 [2024-05-14 02:13:04.797350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.449 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.449 [2024-05-14 02:13:04.809277] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.449 [2024-05-14 02:13:04.809307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.449 2024/05/14 02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.449 [2024-05-14 02:13:04.821256] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.449 [2024-05-14 02:13:04.821278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.449 2024/05/14 
02:13:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.449 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (73812) - No such process 00:15:50.449 02:13:04 -- target/zcopy.sh@49 -- # wait 73812 00:15:50.449 02:13:04 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.449 02:13:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.449 02:13:04 -- common/autotest_common.sh@10 -- # set +x 00:15:50.449 02:13:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.449 02:13:04 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:50.449 02:13:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.449 02:13:04 -- common/autotest_common.sh@10 -- # set +x 00:15:50.449 delay0 00:15:50.449 02:13:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.449 02:13:04 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:50.449 02:13:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.449 02:13:04 -- common/autotest_common.sh@10 -- # set +x 00:15:50.449 02:13:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.449 02:13:04 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:50.449 [2024-05-14 02:13:05.017980] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:57.061 Initializing NVMe Controllers 00:15:57.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:57.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:57.061 Initialization complete. Launching workers. 
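At this point zcopy.sh has finished its negative-path loop (every nvmf_subsystem_add_ns call above is expected to fail with Code=-32602 because NSID 1 is already in use on nqn.2016-06.io.spdk:cnode1), drops the namespace, wraps malloc0 in a delay bdev so that I/O stays in flight long enough to be aborted, re-attaches it as NSID 1, and drives it with the abort example over the TCP listener. A minimal standalone sketch of the same sequence follows; it assumes a running SPDK target and calls scripts/rpc.py directly, whereas the trace above goes through the test framework's rpc_cmd wrapper. All arguments are copied from the trace.
  # Sketch only -- assumes scripts/rpc.py can reach the target the test started.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive the delayed namespace with the bundled abort example (same flags as the trace):
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'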
00:15:57.061 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:15:57.061 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:15:57.061 success 171, unsuccess 185, failed 0 00:15:57.061 02:13:11 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:57.061 02:13:11 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:57.061 02:13:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:57.061 02:13:11 -- nvmf/common.sh@116 -- # sync 00:15:57.061 02:13:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:57.061 02:13:11 -- nvmf/common.sh@119 -- # set +e 00:15:57.061 02:13:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:57.061 02:13:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:57.061 rmmod nvme_tcp 00:15:57.061 rmmod nvme_fabrics 00:15:57.061 rmmod nvme_keyring 00:15:57.061 02:13:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:57.061 02:13:11 -- nvmf/common.sh@123 -- # set -e 00:15:57.061 02:13:11 -- nvmf/common.sh@124 -- # return 0 00:15:57.061 02:13:11 -- nvmf/common.sh@477 -- # '[' -n 73648 ']' 00:15:57.061 02:13:11 -- nvmf/common.sh@478 -- # killprocess 73648 00:15:57.061 02:13:11 -- common/autotest_common.sh@926 -- # '[' -z 73648 ']' 00:15:57.061 02:13:11 -- common/autotest_common.sh@930 -- # kill -0 73648 00:15:57.061 02:13:11 -- common/autotest_common.sh@931 -- # uname 00:15:57.061 02:13:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:57.061 02:13:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73648 00:15:57.061 02:13:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:57.061 02:13:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:57.061 killing process with pid 73648 00:15:57.061 02:13:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73648' 00:15:57.061 02:13:11 -- common/autotest_common.sh@945 -- # kill 73648 00:15:57.061 02:13:11 -- common/autotest_common.sh@950 -- # wait 73648 00:15:57.061 02:13:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:57.061 02:13:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:57.061 02:13:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:57.061 02:13:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.061 02:13:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:57.061 02:13:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.061 02:13:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.061 02:13:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.061 02:13:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:57.061 00:15:57.061 real 0m24.458s 00:15:57.061 user 0m39.627s 00:15:57.061 sys 0m6.376s 00:15:57.061 02:13:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.061 02:13:11 -- common/autotest_common.sh@10 -- # set +x 00:15:57.061 ************************************ 00:15:57.061 END TEST nvmf_zcopy 00:15:57.061 ************************************ 00:15:57.061 02:13:11 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:57.061 02:13:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:57.061 02:13:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:57.061 02:13:11 -- common/autotest_common.sh@10 -- # set +x 00:15:57.061 ************************************ 00:15:57.061 START TEST nvmf_nmic 
00:15:57.061 ************************************ 00:15:57.061 02:13:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:57.061 * Looking for test storage... 00:15:57.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:57.061 02:13:11 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.061 02:13:11 -- nvmf/common.sh@7 -- # uname -s 00:15:57.061 02:13:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.061 02:13:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.061 02:13:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.061 02:13:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.061 02:13:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.061 02:13:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.061 02:13:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.061 02:13:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.061 02:13:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.061 02:13:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.061 02:13:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:15:57.061 02:13:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:15:57.061 02:13:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.061 02:13:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.061 02:13:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.061 02:13:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.061 02:13:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.061 02:13:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.062 02:13:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.062 02:13:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.062 02:13:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.062 02:13:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.062 02:13:11 -- paths/export.sh@5 -- # export PATH 00:15:57.062 02:13:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.062 02:13:11 -- nvmf/common.sh@46 -- # : 0 00:15:57.062 02:13:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:57.062 02:13:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:57.062 02:13:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:57.062 02:13:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.062 02:13:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.062 02:13:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:57.062 02:13:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:57.062 02:13:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:57.062 02:13:11 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.062 02:13:11 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.062 02:13:11 -- target/nmic.sh@14 -- # nvmftestinit 00:15:57.062 02:13:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:57.062 02:13:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.062 02:13:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:57.062 02:13:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:57.062 02:13:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:57.062 02:13:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.062 02:13:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.062 02:13:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.062 02:13:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:57.062 02:13:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:57.062 02:13:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:57.062 02:13:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:57.062 02:13:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:57.062 02:13:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:57.062 02:13:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.062 02:13:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.062 02:13:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:57.062 02:13:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:57.062 02:13:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:57.062 02:13:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:57.062 02:13:11 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:57.062 02:13:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.062 02:13:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:57.062 02:13:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:57.062 02:13:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:57.062 02:13:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:57.062 02:13:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:57.062 02:13:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:57.062 Cannot find device "nvmf_tgt_br" 00:15:57.062 02:13:11 -- nvmf/common.sh@154 -- # true 00:15:57.062 02:13:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.062 Cannot find device "nvmf_tgt_br2" 00:15:57.062 02:13:11 -- nvmf/common.sh@155 -- # true 00:15:57.062 02:13:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:57.062 02:13:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:57.062 Cannot find device "nvmf_tgt_br" 00:15:57.062 02:13:11 -- nvmf/common.sh@157 -- # true 00:15:57.062 02:13:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:57.062 Cannot find device "nvmf_tgt_br2" 00:15:57.062 02:13:11 -- nvmf/common.sh@158 -- # true 00:15:57.062 02:13:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:57.321 02:13:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:57.321 02:13:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.321 02:13:11 -- nvmf/common.sh@161 -- # true 00:15:57.321 02:13:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.321 02:13:11 -- nvmf/common.sh@162 -- # true 00:15:57.321 02:13:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.321 02:13:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.321 02:13:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.321 02:13:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.321 02:13:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.321 02:13:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.321 02:13:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.321 02:13:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:57.321 02:13:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:57.321 02:13:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:57.321 02:13:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:57.321 02:13:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:57.321 02:13:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:57.321 02:13:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.321 02:13:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.321 02:13:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:57.321 02:13:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:57.321 02:13:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:57.321 02:13:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.321 02:13:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.321 02:13:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.321 02:13:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.321 02:13:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.321 02:13:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:57.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:15:57.321 00:15:57.321 --- 10.0.0.2 ping statistics --- 00:15:57.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.321 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:57.321 02:13:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:57.321 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:57.321 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:15:57.321 00:15:57.321 --- 10.0.0.3 ping statistics --- 00:15:57.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.321 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:57.321 02:13:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:57.321 00:15:57.321 --- 10.0.0.1 ping statistics --- 00:15:57.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.321 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:57.321 02:13:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.321 02:13:11 -- nvmf/common.sh@421 -- # return 0 00:15:57.321 02:13:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:57.321 02:13:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.321 02:13:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:57.321 02:13:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:57.321 02:13:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.321 02:13:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:57.321 02:13:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:57.321 02:13:11 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:57.321 02:13:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:57.321 02:13:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:57.321 02:13:11 -- common/autotest_common.sh@10 -- # set +x 00:15:57.580 02:13:11 -- nvmf/common.sh@469 -- # nvmfpid=74138 00:15:57.580 02:13:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.580 02:13:11 -- nvmf/common.sh@470 -- # waitforlisten 74138 00:15:57.580 02:13:11 -- common/autotest_common.sh@819 -- # '[' -z 74138 ']' 00:15:57.580 02:13:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.580 02:13:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:57.580 02:13:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:57.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.580 02:13:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:57.580 02:13:11 -- common/autotest_common.sh@10 -- # set +x 00:15:57.580 [2024-05-14 02:13:11.971475] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:57.580 [2024-05-14 02:13:11.971573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.580 [2024-05-14 02:13:12.110075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.837 [2024-05-14 02:13:12.179718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:57.837 [2024-05-14 02:13:12.179935] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.837 [2024-05-14 02:13:12.179967] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.837 [2024-05-14 02:13:12.179986] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.837 [2024-05-14 02:13:12.180131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.837 [2024-05-14 02:13:12.180239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.837 [2024-05-14 02:13:12.180371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.837 [2024-05-14 02:13:12.180388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.770 02:13:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:58.770 02:13:13 -- common/autotest_common.sh@852 -- # return 0 00:15:58.770 02:13:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:58.770 02:13:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:58.770 02:13:13 -- common/autotest_common.sh@10 -- # set +x 00:15:58.770 02:13:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.770 02:13:13 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.770 02:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.770 02:13:13 -- common/autotest_common.sh@10 -- # set +x 00:15:58.770 [2024-05-14 02:13:13.094130] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.770 02:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.770 02:13:13 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:58.770 02:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.770 02:13:13 -- common/autotest_common.sh@10 -- # set +x 00:15:58.770 Malloc0 00:15:58.770 02:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.770 02:13:13 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:58.770 02:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.770 02:13:13 -- common/autotest_common.sh@10 -- # set +x 00:15:58.770 02:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.770 02:13:13 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.770 02:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.770 02:13:13 -- common/autotest_common.sh@10 -- # set +x 
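Quick reference for the nvmf_veth_init trace above: the interface plumbing it performs reduces to the shell commands below. This is a condensed sketch assembled from the ip/iptables calls visible in this log (namespace, interface, and address names as logged; the second target interface nvmf_tgt_if2 / 10.0.0.3 is built the same way and omitted here), not a replacement for test/nvmf/common.sh.

    # Condensed view of the veth/namespace topology used by this test run.
    ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                                # bridge joins the two halves
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target reachability check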
00:15:58.770 02:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.770 02:13:13 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.770 02:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.770 02:13:13 -- common/autotest_common.sh@10 -- # set +x 00:15:58.770 [2024-05-14 02:13:13.158040] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.770 02:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.770 test case1: single bdev can't be used in multiple subsystems 00:15:58.770 02:13:13 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:58.770 02:13:13 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:58.770 02:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.770 02:13:13 -- common/autotest_common.sh@10 -- # set +x 00:15:58.770 02:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.770 02:13:13 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:58.770 02:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.770 02:13:13 -- common/autotest_common.sh@10 -- # set +x 00:15:58.770 02:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.770 02:13:13 -- target/nmic.sh@28 -- # nmic_status=0 00:15:58.770 02:13:13 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:58.770 02:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.770 02:13:13 -- common/autotest_common.sh@10 -- # set +x 00:15:58.770 [2024-05-14 02:13:13.181881] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:58.770 [2024-05-14 02:13:13.181940] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:58.770 [2024-05-14 02:13:13.181960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.770 2024/05/14 02:13:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.770 request: 00:15:58.770 { 00:15:58.770 "method": "nvmf_subsystem_add_ns", 00:15:58.770 "params": { 00:15:58.770 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:58.770 "namespace": { 00:15:58.770 "bdev_name": "Malloc0" 00:15:58.770 } 00:15:58.770 } 00:15:58.770 } 00:15:58.770 Got JSON-RPC error response 00:15:58.770 GoRPCClient: error on JSON-RPC call 00:15:58.770 02:13:13 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:58.770 02:13:13 -- target/nmic.sh@29 -- # nmic_status=1 00:15:58.770 02:13:13 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:58.770 Adding namespace failed - expected result. 00:15:58.770 02:13:13 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
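Quick reference for test case1 above: it deliberately provokes the Code=-32602 "Invalid parameters" error by attaching one malloc bdev to two subsystems. A minimal way to reproduce the same failure by hand with scripts/rpc.py, using the NQNs, serials, and bdev name seen in this log (the nvmf_tgt must already be running and listening on its RPC socket), would be roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Malloc0 already claimed
    # Expected result: Code=-32602 Msg=Invalid parameters, matching the error logged above.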
00:15:58.770 test case2: host connect to nvmf target in multiple paths 00:15:58.770 02:13:13 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:58.770 02:13:13 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:58.770 02:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.770 02:13:13 -- common/autotest_common.sh@10 -- # set +x 00:15:58.770 [2024-05-14 02:13:13.194028] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:58.770 02:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.770 02:13:13 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.028 02:13:13 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:59.028 02:13:13 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:59.028 02:13:13 -- common/autotest_common.sh@1177 -- # local i=0 00:15:59.028 02:13:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.028 02:13:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:59.028 02:13:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:01.557 02:13:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:01.557 02:13:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:01.557 02:13:15 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.557 02:13:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:01.557 02:13:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.557 02:13:15 -- common/autotest_common.sh@1187 -- # return 0 00:16:01.557 02:13:15 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:01.557 [global] 00:16:01.557 thread=1 00:16:01.557 invalidate=1 00:16:01.557 rw=write 00:16:01.557 time_based=1 00:16:01.557 runtime=1 00:16:01.557 ioengine=libaio 00:16:01.557 direct=1 00:16:01.557 bs=4096 00:16:01.557 iodepth=1 00:16:01.557 norandommap=0 00:16:01.557 numjobs=1 00:16:01.557 00:16:01.557 verify_dump=1 00:16:01.557 verify_backlog=512 00:16:01.557 verify_state_save=0 00:16:01.557 do_verify=1 00:16:01.557 verify=crc32c-intel 00:16:01.557 [job0] 00:16:01.557 filename=/dev/nvme0n1 00:16:01.557 Could not set queue depth (nvme0n1) 00:16:01.558 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.558 fio-3.35 00:16:01.558 Starting 1 thread 00:16:02.491 00:16:02.491 job0: (groupid=0, jobs=1): err= 0: pid=74248: Tue May 14 02:13:16 2024 00:16:02.491 read: IOPS=3250, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec) 00:16:02.491 slat (nsec): min=13572, max=60136, avg=15960.81, stdev=3255.59 00:16:02.492 clat (usec): min=114, max=509, avg=146.77, stdev=11.83 00:16:02.492 lat (usec): min=144, max=524, avg=162.73, stdev=12.18 00:16:02.492 clat percentiles (usec): 00:16:02.492 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:16:02.492 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:16:02.492 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 
95.00th=[ 165], 00:16:02.492 | 99.00th=[ 178], 99.50th=[ 202], 99.90th=[ 227], 99.95th=[ 233], 00:16:02.492 | 99.99th=[ 510] 00:16:02.492 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:02.492 slat (nsec): min=19907, max=97995, avg=23100.39, stdev=4844.45 00:16:02.492 clat (usec): min=87, max=201, avg=104.97, stdev= 7.75 00:16:02.492 lat (usec): min=113, max=299, avg=128.07, stdev= 9.56 00:16:02.492 clat percentiles (usec): 00:16:02.492 | 1.00th=[ 95], 5.00th=[ 97], 10.00th=[ 98], 20.00th=[ 99], 00:16:02.492 | 30.00th=[ 101], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 105], 00:16:02.492 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 116], 95.00th=[ 121], 00:16:02.492 | 99.00th=[ 130], 99.50th=[ 137], 99.90th=[ 155], 99.95th=[ 169], 00:16:02.492 | 99.99th=[ 202] 00:16:02.492 bw ( KiB/s): min=15168, max=15168, per=100.00%, avg=15168.00, stdev= 0.00, samples=1 00:16:02.492 iops : min= 3792, max= 3792, avg=3792.00, stdev= 0.00, samples=1 00:16:02.492 lat (usec) : 100=13.53%, 250=86.46%, 750=0.01% 00:16:02.492 cpu : usr=3.20%, sys=9.50%, ctx=6840, majf=0, minf=2 00:16:02.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.492 issued rwts: total=3254,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.492 00:16:02.492 Run status group 0 (all jobs): 00:16:02.492 READ: bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=12.7MiB (13.3MB), run=1001-1001msec 00:16:02.492 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:02.492 00:16:02.492 Disk stats (read/write): 00:16:02.492 nvme0n1: ios=3089/3072, merge=0/0, ticks=467/347, in_queue=814, util=91.38% 00:16:02.492 02:13:16 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:02.492 02:13:16 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:02.492 02:13:16 -- common/autotest_common.sh@1198 -- # local i=0 00:16:02.492 02:13:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:02.492 02:13:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.492 02:13:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:02.492 02:13:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.492 02:13:16 -- common/autotest_common.sh@1210 -- # return 0 00:16:02.492 02:13:16 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:02.492 02:13:16 -- target/nmic.sh@53 -- # nvmftestfini 00:16:02.492 02:13:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:02.492 02:13:16 -- nvmf/common.sh@116 -- # sync 00:16:02.492 02:13:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:02.492 02:13:16 -- nvmf/common.sh@119 -- # set +e 00:16:02.492 02:13:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:02.492 02:13:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:02.492 rmmod nvme_tcp 00:16:02.492 rmmod nvme_fabrics 00:16:02.492 rmmod nvme_keyring 00:16:02.492 02:13:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:02.492 02:13:17 -- nvmf/common.sh@123 -- # set -e 00:16:02.492 02:13:17 -- nvmf/common.sh@124 -- # return 0 00:16:02.492 02:13:17 -- nvmf/common.sh@477 -- 
# '[' -n 74138 ']' 00:16:02.492 02:13:17 -- nvmf/common.sh@478 -- # killprocess 74138 00:16:02.492 02:13:17 -- common/autotest_common.sh@926 -- # '[' -z 74138 ']' 00:16:02.492 02:13:17 -- common/autotest_common.sh@930 -- # kill -0 74138 00:16:02.492 02:13:17 -- common/autotest_common.sh@931 -- # uname 00:16:02.492 02:13:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:02.492 02:13:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74138 00:16:02.492 02:13:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:02.492 02:13:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:02.492 killing process with pid 74138 00:16:02.492 02:13:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74138' 00:16:02.492 02:13:17 -- common/autotest_common.sh@945 -- # kill 74138 00:16:02.492 02:13:17 -- common/autotest_common.sh@950 -- # wait 74138 00:16:02.750 02:13:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:02.750 02:13:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:02.750 02:13:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:02.750 02:13:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.750 02:13:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:02.750 02:13:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.750 02:13:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.750 02:13:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.750 02:13:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:02.750 ************************************ 00:16:02.750 END TEST nvmf_nmic 00:16:02.750 ************************************ 00:16:02.750 00:16:02.750 real 0m5.834s 00:16:02.750 user 0m20.011s 00:16:02.750 sys 0m1.287s 00:16:02.750 02:13:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.750 02:13:17 -- common/autotest_common.sh@10 -- # set +x 00:16:03.009 02:13:17 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:03.009 02:13:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:03.009 02:13:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:03.009 02:13:17 -- common/autotest_common.sh@10 -- # set +x 00:16:03.009 ************************************ 00:16:03.009 START TEST nvmf_fio_target 00:16:03.009 ************************************ 00:16:03.009 02:13:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:03.009 * Looking for test storage... 
00:16:03.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:03.009 02:13:17 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:03.009 02:13:17 -- nvmf/common.sh@7 -- # uname -s 00:16:03.009 02:13:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.009 02:13:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.009 02:13:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.009 02:13:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.009 02:13:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.010 02:13:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.010 02:13:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.010 02:13:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.010 02:13:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.010 02:13:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.010 02:13:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:16:03.010 02:13:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:16:03.010 02:13:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.010 02:13:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.010 02:13:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:03.010 02:13:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:03.010 02:13:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.010 02:13:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.010 02:13:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.010 02:13:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.010 02:13:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.010 02:13:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.010 02:13:17 -- paths/export.sh@5 
-- # export PATH 00:16:03.010 02:13:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.010 02:13:17 -- nvmf/common.sh@46 -- # : 0 00:16:03.010 02:13:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:03.010 02:13:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:03.010 02:13:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:03.010 02:13:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.010 02:13:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.010 02:13:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:03.010 02:13:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:03.010 02:13:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:03.010 02:13:17 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:03.010 02:13:17 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.010 02:13:17 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:03.010 02:13:17 -- target/fio.sh@16 -- # nvmftestinit 00:16:03.010 02:13:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:03.010 02:13:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.010 02:13:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:03.010 02:13:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:03.010 02:13:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:03.010 02:13:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.010 02:13:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.010 02:13:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.010 02:13:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:03.010 02:13:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:03.010 02:13:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:03.010 02:13:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:03.010 02:13:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:03.010 02:13:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:03.010 02:13:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.010 02:13:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.010 02:13:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:03.010 02:13:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:03.010 02:13:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:03.010 02:13:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:03.010 02:13:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:03.010 02:13:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.010 02:13:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:03.010 02:13:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:03.010 02:13:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:03.010 02:13:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:03.010 02:13:17 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:03.010 02:13:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:03.010 Cannot find device "nvmf_tgt_br" 00:16:03.010 02:13:17 -- nvmf/common.sh@154 -- # true 00:16:03.010 02:13:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.010 Cannot find device "nvmf_tgt_br2" 00:16:03.010 02:13:17 -- nvmf/common.sh@155 -- # true 00:16:03.010 02:13:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:03.010 02:13:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:03.010 Cannot find device "nvmf_tgt_br" 00:16:03.010 02:13:17 -- nvmf/common.sh@157 -- # true 00:16:03.010 02:13:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:03.010 Cannot find device "nvmf_tgt_br2" 00:16:03.010 02:13:17 -- nvmf/common.sh@158 -- # true 00:16:03.010 02:13:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:03.010 02:13:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:03.010 02:13:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.010 02:13:17 -- nvmf/common.sh@161 -- # true 00:16:03.010 02:13:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.010 02:13:17 -- nvmf/common.sh@162 -- # true 00:16:03.010 02:13:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:03.010 02:13:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:03.269 02:13:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:03.269 02:13:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:03.269 02:13:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:03.269 02:13:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:03.269 02:13:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:03.269 02:13:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:03.269 02:13:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:03.269 02:13:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:03.269 02:13:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:03.269 02:13:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:03.269 02:13:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:03.269 02:13:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:03.269 02:13:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:03.269 02:13:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:03.269 02:13:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:03.269 02:13:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:03.269 02:13:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:03.269 02:13:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:03.269 02:13:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:03.269 02:13:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:03.269 02:13:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:03.269 02:13:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:03.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:16:03.270 00:16:03.270 --- 10.0.0.2 ping statistics --- 00:16:03.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.270 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:03.270 02:13:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:03.270 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:03.270 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:03.270 00:16:03.270 --- 10.0.0.3 ping statistics --- 00:16:03.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.270 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:03.270 02:13:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:03.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:03.270 00:16:03.270 --- 10.0.0.1 ping statistics --- 00:16:03.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.270 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:03.270 02:13:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.270 02:13:17 -- nvmf/common.sh@421 -- # return 0 00:16:03.270 02:13:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:03.270 02:13:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.270 02:13:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:03.270 02:13:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:03.270 02:13:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.270 02:13:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:03.270 02:13:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:03.270 02:13:17 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:03.270 02:13:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:03.270 02:13:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:03.270 02:13:17 -- common/autotest_common.sh@10 -- # set +x 00:16:03.270 02:13:17 -- nvmf/common.sh@469 -- # nvmfpid=74424 00:16:03.270 02:13:17 -- nvmf/common.sh@470 -- # waitforlisten 74424 00:16:03.270 02:13:17 -- common/autotest_common.sh@819 -- # '[' -z 74424 ']' 00:16:03.270 02:13:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.270 02:13:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:03.270 02:13:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.270 02:13:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.270 02:13:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:03.270 02:13:17 -- common/autotest_common.sh@10 -- # set +x 00:16:03.528 [2024-05-14 02:13:17.876114] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:16:03.528 [2024-05-14 02:13:17.876209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.528 [2024-05-14 02:13:18.017608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.528 [2024-05-14 02:13:18.075665] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:03.528 [2024-05-14 02:13:18.075824] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.528 [2024-05-14 02:13:18.075838] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.528 [2024-05-14 02:13:18.075847] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.528 [2024-05-14 02:13:18.075975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.528 [2024-05-14 02:13:18.076108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.528 [2024-05-14 02:13:18.076246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.528 [2024-05-14 02:13:18.076247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.462 02:13:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:04.462 02:13:18 -- common/autotest_common.sh@852 -- # return 0 00:16:04.462 02:13:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:04.462 02:13:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:04.462 02:13:18 -- common/autotest_common.sh@10 -- # set +x 00:16:04.462 02:13:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.462 02:13:18 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:04.719 [2024-05-14 02:13:19.161469] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.719 02:13:19 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.977 02:13:19 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:04.977 02:13:19 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.235 02:13:19 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:05.235 02:13:19 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.493 02:13:20 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:05.493 02:13:20 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.751 02:13:20 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:05.751 02:13:20 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:06.317 02:13:20 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.317 02:13:20 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:06.317 02:13:20 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.576 02:13:21 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:06.576 02:13:21 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.835 02:13:21 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:16:06.835 02:13:21 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:07.093 02:13:21 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:07.352 02:13:21 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:07.352 02:13:21 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:07.610 02:13:21 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:07.610 02:13:21 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.869 02:13:22 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.127 [2024-05-14 02:13:22.464978] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.127 02:13:22 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:08.386 02:13:22 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:08.644 02:13:23 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.644 02:13:23 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:08.644 02:13:23 -- common/autotest_common.sh@1177 -- # local i=0 00:16:08.644 02:13:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.644 02:13:23 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:16:08.644 02:13:23 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:16:08.644 02:13:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:11.172 02:13:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:11.172 02:13:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:11.172 02:13:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.172 02:13:25 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:16:11.172 02:13:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.172 02:13:25 -- common/autotest_common.sh@1187 -- # return 0 00:16:11.172 02:13:25 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:11.172 [global] 00:16:11.172 thread=1 00:16:11.172 invalidate=1 00:16:11.172 rw=write 00:16:11.172 time_based=1 00:16:11.172 runtime=1 00:16:11.172 ioengine=libaio 00:16:11.172 direct=1 00:16:11.172 bs=4096 00:16:11.172 iodepth=1 00:16:11.172 norandommap=0 00:16:11.172 numjobs=1 00:16:11.172 00:16:11.172 verify_dump=1 00:16:11.172 verify_backlog=512 00:16:11.172 verify_state_save=0 00:16:11.172 do_verify=1 00:16:11.172 verify=crc32c-intel 00:16:11.172 [job0] 00:16:11.172 filename=/dev/nvme0n1 00:16:11.172 [job1] 00:16:11.172 filename=/dev/nvme0n2 00:16:11.172 [job2] 00:16:11.172 filename=/dev/nvme0n3 00:16:11.172 [job3] 00:16:11.172 filename=/dev/nvme0n4 00:16:11.172 Could not set queue depth (nvme0n1) 00:16:11.172 Could not set queue depth (nvme0n2) 
00:16:11.172 Could not set queue depth (nvme0n3) 00:16:11.172 Could not set queue depth (nvme0n4) 00:16:11.173 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.173 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.173 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.173 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.173 fio-3.35 00:16:11.173 Starting 4 threads 00:16:12.106 00:16:12.106 job0: (groupid=0, jobs=1): err= 0: pid=74716: Tue May 14 02:13:26 2024 00:16:12.106 read: IOPS=2815, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:16:12.106 slat (nsec): min=13967, max=63520, avg=18762.85, stdev=4539.65 00:16:12.106 clat (usec): min=133, max=897, avg=162.57, stdev=22.50 00:16:12.106 lat (usec): min=148, max=922, avg=181.33, stdev=23.51 00:16:12.106 clat percentiles (usec): 00:16:12.106 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:16:12.106 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:16:12.106 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 00:16:12.106 | 99.00th=[ 223], 99.50th=[ 243], 99.90th=[ 429], 99.95th=[ 685], 00:16:12.106 | 99.99th=[ 898] 00:16:12.106 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:12.106 slat (usec): min=20, max=168, avg=27.54, stdev= 7.17 00:16:12.106 clat (usec): min=103, max=714, avg=127.76, stdev=14.71 00:16:12.106 lat (usec): min=126, max=747, avg=155.30, stdev=16.74 00:16:12.106 clat percentiles (usec): 00:16:12.106 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 116], 20.00th=[ 120], 00:16:12.107 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:16:12.107 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 145], 00:16:12.107 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 184], 99.95th=[ 247], 00:16:12.107 | 99.99th=[ 717] 00:16:12.107 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:16:12.107 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:12.107 lat (usec) : 250=99.85%, 500=0.10%, 750=0.03%, 1000=0.02% 00:16:12.107 cpu : usr=2.70%, sys=10.20%, ctx=5891, majf=0, minf=4 00:16:12.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.107 issued rwts: total=2818,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.107 job1: (groupid=0, jobs=1): err= 0: pid=74717: Tue May 14 02:13:26 2024 00:16:12.107 read: IOPS=1695, BW=6781KiB/s (6944kB/s)(6788KiB/1001msec) 00:16:12.107 slat (usec): min=11, max=3033, avg=17.79, stdev=73.41 00:16:12.107 clat (usec): min=48, max=3372, avg=274.95, stdev=96.90 00:16:12.107 lat (usec): min=241, max=3384, avg=292.74, stdev=118.03 00:16:12.107 clat percentiles (usec): 00:16:12.107 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 258], 00:16:12.107 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:16:12.107 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 297], 00:16:12.107 | 99.00th=[ 326], 99.50th=[ 363], 99.90th=[ 2114], 99.95th=[ 3359], 00:16:12.107 | 99.99th=[ 3359] 00:16:12.107 write: IOPS=2045, 
BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:12.107 slat (usec): min=17, max=102, avg=25.56, stdev= 4.61 00:16:12.107 clat (usec): min=94, max=372, avg=216.29, stdev=24.85 00:16:12.107 lat (usec): min=117, max=416, avg=241.86, stdev=25.04 00:16:12.107 clat percentiles (usec): 00:16:12.107 | 1.00th=[ 115], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 202], 00:16:12.107 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:16:12.107 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 247], 00:16:12.107 | 99.00th=[ 318], 99.50th=[ 347], 99.90th=[ 363], 99.95th=[ 367], 00:16:12.107 | 99.99th=[ 375] 00:16:12.107 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:16:12.107 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:12.107 lat (usec) : 50=0.03%, 100=0.08%, 250=55.81%, 500=43.98% 00:16:12.107 lat (msec) : 2=0.05%, 4=0.05% 00:16:12.107 cpu : usr=2.30%, sys=5.80%, ctx=3751, majf=0, minf=9 00:16:12.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.107 issued rwts: total=1697,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.107 job2: (groupid=0, jobs=1): err= 0: pid=74718: Tue May 14 02:13:26 2024 00:16:12.107 read: IOPS=2658, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:16:12.107 slat (usec): min=14, max=228, avg=17.22, stdev= 5.74 00:16:12.107 clat (usec): min=142, max=500, avg=170.52, stdev=18.17 00:16:12.107 lat (usec): min=157, max=517, avg=187.74, stdev=19.23 00:16:12.107 clat percentiles (usec): 00:16:12.107 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:16:12.107 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:16:12.107 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:16:12.107 | 99.00th=[ 225], 99.50th=[ 245], 99.90th=[ 461], 99.95th=[ 465], 00:16:12.107 | 99.99th=[ 502] 00:16:12.107 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:12.107 slat (usec): min=19, max=322, avg=24.10, stdev= 6.78 00:16:12.107 clat (usec): min=3, max=342, avg=135.31, stdev=13.98 00:16:12.107 lat (usec): min=131, max=369, avg=159.40, stdev=15.36 00:16:12.107 clat percentiles (usec): 00:16:12.107 | 1.00th=[ 117], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 126], 00:16:12.107 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:16:12.107 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:16:12.107 | 99.00th=[ 180], 99.50th=[ 194], 99.90th=[ 255], 99.95th=[ 330], 00:16:12.107 | 99.99th=[ 343] 00:16:12.107 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:16:12.107 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:12.107 lat (usec) : 4=0.02%, 250=99.70%, 500=0.26%, 750=0.02% 00:16:12.107 cpu : usr=2.20%, sys=9.20%, ctx=5735, majf=0, minf=13 00:16:12.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.107 issued rwts: total=2661,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.107 job3: (groupid=0, 
jobs=1): err= 0: pid=74719: Tue May 14 02:13:26 2024 00:16:12.107 read: IOPS=1685, BW=6741KiB/s (6903kB/s)(6748KiB/1001msec) 00:16:12.107 slat (usec): min=9, max=101, avg=17.05, stdev= 3.72 00:16:12.107 clat (usec): min=156, max=3366, avg=275.81, stdev=122.02 00:16:12.107 lat (usec): min=180, max=3380, avg=292.86, stdev=121.93 00:16:12.107 clat percentiles (usec): 00:16:12.107 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 258], 00:16:12.107 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:16:12.107 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:16:12.107 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 3326], 99.95th=[ 3359], 00:16:12.107 | 99.99th=[ 3359] 00:16:12.107 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:12.107 slat (usec): min=16, max=159, avg=25.70, stdev= 5.16 00:16:12.107 clat (usec): min=101, max=386, avg=217.73, stdev=25.00 00:16:12.107 lat (usec): min=133, max=408, avg=243.43, stdev=24.77 00:16:12.107 clat percentiles (usec): 00:16:12.107 | 1.00th=[ 133], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:16:12.107 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 219], 00:16:12.107 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 239], 95.00th=[ 249], 00:16:12.107 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 379], 99.95th=[ 379], 00:16:12.107 | 99.99th=[ 388] 00:16:12.107 bw ( KiB/s): min= 8208, max= 8208, per=20.06%, avg=8208.00, stdev= 0.00, samples=1 00:16:12.107 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:16:12.107 lat (usec) : 250=56.33%, 500=43.53% 00:16:12.107 lat (msec) : 2=0.05%, 4=0.08% 00:16:12.107 cpu : usr=1.70%, sys=6.00%, ctx=3736, majf=0, minf=9 00:16:12.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.107 issued rwts: total=1687,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.107 00:16:12.107 Run status group 0 (all jobs): 00:16:12.107 READ: bw=34.6MiB/s (36.3MB/s), 6741KiB/s-11.0MiB/s (6903kB/s-11.5MB/s), io=34.6MiB (36.3MB), run=1001-1001msec 00:16:12.107 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:16:12.107 00:16:12.107 Disk stats (read/write): 00:16:12.107 nvme0n1: ios=2569/2560, merge=0/0, ticks=450/360, in_queue=810, util=88.57% 00:16:12.107 nvme0n2: ios=1585/1707, merge=0/0, ticks=490/377, in_queue=867, util=94.14% 00:16:12.107 nvme0n3: ios=2441/2560, merge=0/0, ticks=430/372, in_queue=802, util=89.84% 00:16:12.107 nvme0n4: ios=1575/1694, merge=0/0, ticks=497/380, in_queue=877, util=93.70% 00:16:12.107 02:13:26 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:12.107 [global] 00:16:12.107 thread=1 00:16:12.107 invalidate=1 00:16:12.107 rw=randwrite 00:16:12.107 time_based=1 00:16:12.107 runtime=1 00:16:12.107 ioengine=libaio 00:16:12.107 direct=1 00:16:12.107 bs=4096 00:16:12.107 iodepth=1 00:16:12.107 norandommap=0 00:16:12.107 numjobs=1 00:16:12.107 00:16:12.107 verify_dump=1 00:16:12.107 verify_backlog=512 00:16:12.107 verify_state_save=0 00:16:12.107 do_verify=1 00:16:12.107 verify=crc32c-intel 00:16:12.107 [job0] 00:16:12.107 filename=/dev/nvme0n1 00:16:12.107 [job1] 00:16:12.107 filename=/dev/nvme0n2 00:16:12.107 
[job2] 00:16:12.107 filename=/dev/nvme0n3 00:16:12.107 [job3] 00:16:12.107 filename=/dev/nvme0n4 00:16:12.107 Could not set queue depth (nvme0n1) 00:16:12.107 Could not set queue depth (nvme0n2) 00:16:12.107 Could not set queue depth (nvme0n3) 00:16:12.107 Could not set queue depth (nvme0n4) 00:16:12.366 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.366 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.366 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.366 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.366 fio-3.35 00:16:12.366 Starting 4 threads 00:16:13.742 00:16:13.742 job0: (groupid=0, jobs=1): err= 0: pid=74778: Tue May 14 02:13:27 2024 00:16:13.742 read: IOPS=2797, BW=10.9MiB/s (11.5MB/s)(10.9MiB/1001msec) 00:16:13.742 slat (usec): min=13, max=109, avg=17.24, stdev= 4.68 00:16:13.742 clat (usec): min=133, max=1109, avg=159.89, stdev=26.31 00:16:13.742 lat (usec): min=147, max=1125, avg=177.13, stdev=26.99 00:16:13.742 clat percentiles (usec): 00:16:13.742 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:16:13.742 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:16:13.742 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:16:13.742 | 99.00th=[ 198], 99.50th=[ 215], 99.90th=[ 611], 99.95th=[ 644], 00:16:13.742 | 99.99th=[ 1106] 00:16:13.742 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:13.742 slat (usec): min=19, max=941, avg=25.69, stdev=19.10 00:16:13.742 clat (usec): min=3, max=1514, avg=134.68, stdev=41.92 00:16:13.742 lat (usec): min=123, max=1536, avg=160.37, stdev=46.86 00:16:13.742 clat percentiles (usec): 00:16:13.742 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 124], 00:16:13.743 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:16:13.743 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 155], 00:16:13.743 | 99.00th=[ 196], 99.50th=[ 269], 99.90th=[ 750], 99.95th=[ 807], 00:16:13.743 | 99.99th=[ 1516] 00:16:13.743 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:16:13.743 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:13.743 lat (usec) : 4=0.03%, 250=99.51%, 500=0.22%, 750=0.15%, 1000=0.05% 00:16:13.743 lat (msec) : 2=0.03% 00:16:13.743 cpu : usr=2.90%, sys=9.00%, ctx=5879, majf=0, minf=10 00:16:13.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.743 issued rwts: total=2800,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.743 job1: (groupid=0, jobs=1): err= 0: pid=74779: Tue May 14 02:13:27 2024 00:16:13.743 read: IOPS=2832, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:16:13.743 slat (nsec): min=13092, max=53862, avg=16071.45, stdev=3065.83 00:16:13.743 clat (usec): min=123, max=859, avg=159.61, stdev=20.08 00:16:13.743 lat (usec): min=147, max=884, avg=175.68, stdev=20.51 00:16:13.743 clat percentiles (usec): 00:16:13.743 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:16:13.743 | 30.00th=[ 155], 40.00th=[ 157], 
50.00th=[ 159], 60.00th=[ 161], 00:16:13.743 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 178], 00:16:13.743 | 99.00th=[ 190], 99.50th=[ 215], 99.90th=[ 416], 99.95th=[ 603], 00:16:13.743 | 99.99th=[ 857] 00:16:13.743 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:13.743 slat (usec): min=19, max=156, avg=27.08, stdev= 9.54 00:16:13.743 clat (usec): min=41, max=423, avg=132.61, stdev=14.22 00:16:13.743 lat (usec): min=127, max=450, avg=159.69, stdev=17.66 00:16:13.743 clat percentiles (usec): 00:16:13.743 | 1.00th=[ 110], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 124], 00:16:13.743 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:16:13.743 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:16:13.743 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 258], 99.95th=[ 306], 00:16:13.743 | 99.99th=[ 424] 00:16:13.743 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:16:13.743 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:13.743 lat (usec) : 50=0.05%, 100=0.12%, 250=99.64%, 500=0.15%, 750=0.02% 00:16:13.743 lat (usec) : 1000=0.02% 00:16:13.743 cpu : usr=2.70%, sys=9.40%, ctx=5938, majf=0, minf=9 00:16:13.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.743 issued rwts: total=2835,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.743 job2: (groupid=0, jobs=1): err= 0: pid=74780: Tue May 14 02:13:27 2024 00:16:13.743 read: IOPS=2712, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:16:13.743 slat (nsec): min=13493, max=45222, avg=15865.25, stdev=2210.06 00:16:13.743 clat (usec): min=138, max=1078, avg=165.40, stdev=25.92 00:16:13.743 lat (usec): min=154, max=1093, avg=181.26, stdev=25.99 00:16:13.743 clat percentiles (usec): 00:16:13.743 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:16:13.743 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:16:13.743 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 178], 95.00th=[ 184], 00:16:13.743 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 611], 99.95th=[ 766], 00:16:13.743 | 99.99th=[ 1074] 00:16:13.743 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:13.743 slat (usec): min=19, max=103, avg=23.07, stdev= 4.18 00:16:13.743 clat (usec): min=110, max=975, avg=139.15, stdev=18.92 00:16:13.743 lat (usec): min=133, max=1002, avg=162.22, stdev=19.61 00:16:13.743 clat percentiles (usec): 00:16:13.743 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:16:13.743 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:16:13.743 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 157], 00:16:13.743 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 255], 99.95th=[ 302], 00:16:13.743 | 99.99th=[ 979] 00:16:13.743 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:16:13.743 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:13.743 lat (usec) : 250=99.84%, 500=0.07%, 750=0.03%, 1000=0.03% 00:16:13.743 lat (msec) : 2=0.02% 00:16:13.743 cpu : usr=2.70%, sys=7.90%, ctx=5787, majf=0, minf=15 00:16:13.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.743 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.743 issued rwts: total=2715,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.743 job3: (groupid=0, jobs=1): err= 0: pid=74781: Tue May 14 02:13:27 2024 00:16:13.743 read: IOPS=2636, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:16:13.743 slat (nsec): min=12984, max=46468, avg=15462.69, stdev=2962.83 00:16:13.743 clat (usec): min=142, max=2356, avg=168.57, stdev=48.01 00:16:13.743 lat (usec): min=156, max=2370, avg=184.03, stdev=48.22 00:16:13.743 clat percentiles (usec): 00:16:13.743 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:16:13.743 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:16:13.743 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 190], 00:16:13.743 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 685], 99.95th=[ 963], 00:16:13.743 | 99.99th=[ 2343] 00:16:13.743 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:13.743 slat (nsec): min=19315, max=93473, avg=22436.44, stdev=3793.77 00:16:13.743 clat (usec): min=109, max=2719, avg=141.92, stdev=51.02 00:16:13.743 lat (usec): min=130, max=2755, avg=164.36, stdev=51.55 00:16:13.743 clat percentiles (usec): 00:16:13.743 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:16:13.743 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:16:13.743 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:16:13.743 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 277], 99.95th=[ 971], 00:16:13.743 | 99.99th=[ 2704] 00:16:13.743 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:16:13.743 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:13.743 lat (usec) : 250=99.82%, 500=0.07%, 750=0.04%, 1000=0.04% 00:16:13.743 lat (msec) : 4=0.04% 00:16:13.743 cpu : usr=2.20%, sys=8.00%, ctx=5711, majf=0, minf=11 00:16:13.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.743 issued rwts: total=2639,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.743 00:16:13.743 Run status group 0 (all jobs): 00:16:13.743 READ: bw=42.9MiB/s (45.0MB/s), 10.3MiB/s-11.1MiB/s (10.8MB/s-11.6MB/s), io=42.9MiB (45.0MB), run=1001-1001msec 00:16:13.743 WRITE: bw=48.0MiB/s (50.3MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1001-1001msec 00:16:13.743 00:16:13.743 Disk stats (read/write): 00:16:13.743 nvme0n1: ios=2559/2560, merge=0/0, ticks=445/370, in_queue=815, util=88.88% 00:16:13.743 nvme0n2: ios=2609/2617, merge=0/0, ticks=448/375, in_queue=823, util=89.51% 00:16:13.743 nvme0n3: ios=2482/2560, merge=0/0, ticks=457/388, in_queue=845, util=89.54% 00:16:13.743 nvme0n4: ios=2397/2560, merge=0/0, ticks=412/383, in_queue=795, util=89.79% 00:16:13.743 02:13:27 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:13.743 [global] 00:16:13.743 thread=1 00:16:13.743 invalidate=1 00:16:13.743 rw=write 00:16:13.743 time_based=1 00:16:13.743 runtime=1 00:16:13.743 ioengine=libaio 00:16:13.743 direct=1 00:16:13.743 bs=4096 00:16:13.743 iodepth=128 
00:16:13.743 norandommap=0 00:16:13.743 numjobs=1 00:16:13.743 00:16:13.743 verify_dump=1 00:16:13.743 verify_backlog=512 00:16:13.743 verify_state_save=0 00:16:13.743 do_verify=1 00:16:13.743 verify=crc32c-intel 00:16:13.743 [job0] 00:16:13.743 filename=/dev/nvme0n1 00:16:13.743 [job1] 00:16:13.743 filename=/dev/nvme0n2 00:16:13.743 [job2] 00:16:13.743 filename=/dev/nvme0n3 00:16:13.743 [job3] 00:16:13.743 filename=/dev/nvme0n4 00:16:13.743 Could not set queue depth (nvme0n1) 00:16:13.743 Could not set queue depth (nvme0n2) 00:16:13.743 Could not set queue depth (nvme0n3) 00:16:13.743 Could not set queue depth (nvme0n4) 00:16:13.743 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.743 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.743 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.743 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.743 fio-3.35 00:16:13.743 Starting 4 threads 00:16:15.125 00:16:15.125 job0: (groupid=0, jobs=1): err= 0: pid=74835: Tue May 14 02:13:29 2024 00:16:15.125 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:16:15.125 slat (usec): min=6, max=5748, avg=92.68, stdev=496.87 00:16:15.125 clat (usec): min=7181, max=18240, avg=12040.08, stdev=1337.01 00:16:15.125 lat (usec): min=7204, max=18254, avg=12132.76, stdev=1380.42 00:16:15.125 clat percentiles (usec): 00:16:15.125 | 1.00th=[ 8356], 5.00th=[10159], 10.00th=[10552], 20.00th=[11338], 00:16:15.125 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:16:15.126 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13435], 95.00th=[14484], 00:16:15.126 | 99.00th=[16450], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:16:15.126 | 99.99th=[18220] 00:16:15.126 write: IOPS=5288, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1003msec); 0 zone resets 00:16:15.126 slat (usec): min=11, max=5045, avg=92.32, stdev=464.47 00:16:15.126 clat (usec): min=290, max=18724, avg=12294.46, stdev=1573.01 00:16:15.126 lat (usec): min=4564, max=18761, avg=12386.78, stdev=1591.26 00:16:15.126 clat percentiles (usec): 00:16:15.126 | 1.00th=[ 5604], 5.00th=[ 9241], 10.00th=[10945], 20.00th=[11731], 00:16:15.126 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:16:15.126 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13435], 95.00th=[14353], 00:16:15.126 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[18744], 00:16:15.126 | 99.99th=[18744] 00:16:15.126 bw ( KiB/s): min=20521, max=20928, per=26.47%, avg=20724.50, stdev=287.79, samples=2 00:16:15.126 iops : min= 5130, max= 5232, avg=5181.00, stdev=72.12, samples=2 00:16:15.126 lat (usec) : 500=0.01% 00:16:15.126 lat (msec) : 10=5.68%, 20=94.31% 00:16:15.126 cpu : usr=4.39%, sys=14.17%, ctx=483, majf=0, minf=13 00:16:15.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:15.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.126 issued rwts: total=5120,5304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.126 job1: (groupid=0, jobs=1): err= 0: pid=74836: Tue May 14 02:13:29 2024 00:16:15.126 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:16:15.126 slat 
(usec): min=7, max=3897, avg=92.52, stdev=440.91 00:16:15.126 clat (usec): min=3543, max=16031, avg=12108.90, stdev=1405.61 00:16:15.126 lat (usec): min=3563, max=16064, avg=12201.42, stdev=1388.82 00:16:15.126 clat percentiles (usec): 00:16:15.126 | 1.00th=[ 8291], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[11338], 00:16:15.126 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:16:15.126 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13566], 00:16:15.126 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15664], 99.95th=[15664], 00:16:15.126 | 99.99th=[16057] 00:16:15.126 write: IOPS=5116, BW=20.0MiB/s (21.0MB/s)(20.0MiB/1003msec); 0 zone resets 00:16:15.126 slat (usec): min=9, max=3823, avg=95.15, stdev=391.93 00:16:15.126 clat (usec): min=489, max=16273, avg=12610.66, stdev=1379.19 00:16:15.126 lat (usec): min=3222, max=16304, avg=12705.81, stdev=1345.98 00:16:15.126 clat percentiles (usec): 00:16:15.126 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[11994], 00:16:15.126 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:16:15.126 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13960], 95.00th=[14222], 00:16:15.126 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16188], 99.95th=[16319], 00:16:15.126 | 99.99th=[16319] 00:16:15.126 bw ( KiB/s): min=20480, max=20521, per=26.18%, avg=20500.50, stdev=28.99, samples=2 00:16:15.126 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:16:15.126 lat (usec) : 500=0.01% 00:16:15.126 lat (msec) : 4=0.31%, 10=7.90%, 20=91.78% 00:16:15.126 cpu : usr=4.39%, sys=14.47%, ctx=714, majf=0, minf=11 00:16:15.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:15.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.126 issued rwts: total=5120,5132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.126 job2: (groupid=0, jobs=1): err= 0: pid=74837: Tue May 14 02:13:29 2024 00:16:15.126 read: IOPS=4417, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1004msec) 00:16:15.126 slat (usec): min=5, max=4366, avg=102.00, stdev=520.28 00:16:15.126 clat (usec): min=2286, max=18134, avg=13847.13, stdev=1622.98 00:16:15.126 lat (usec): min=3483, max=18183, avg=13949.14, stdev=1605.26 00:16:15.126 clat percentiles (usec): 00:16:15.126 | 1.00th=[ 8717], 5.00th=[10945], 10.00th=[11863], 20.00th=[13435], 00:16:15.126 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:16:15.126 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:16:15.126 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:16:15.126 | 99.99th=[18220] 00:16:15.126 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:16:15.126 slat (usec): min=7, max=4832, avg=111.10, stdev=559.51 00:16:15.126 clat (usec): min=9401, max=18210, avg=14177.81, stdev=1718.12 00:16:15.126 lat (usec): min=9444, max=18248, avg=14288.92, stdev=1671.49 00:16:15.126 clat percentiles (usec): 00:16:15.126 | 1.00th=[10421], 5.00th=[11076], 10.00th=[11207], 20.00th=[11863], 00:16:15.126 | 30.00th=[14091], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:16:15.126 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15795], 95.00th=[15926], 00:16:15.126 | 99.00th=[16450], 99.50th=[16450], 99.90th=[16450], 99.95th=[17433], 00:16:15.126 | 99.99th=[18220] 00:16:15.126 bw ( KiB/s): min=18024, 
max=18840, per=23.54%, avg=18432.00, stdev=577.00, samples=2 00:16:15.126 iops : min= 4506, max= 4710, avg=4608.00, stdev=144.25, samples=2 00:16:15.126 lat (msec) : 4=0.27%, 10=0.76%, 20=98.97% 00:16:15.126 cpu : usr=4.59%, sys=12.96%, ctx=455, majf=0, minf=17 00:16:15.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:15.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.126 issued rwts: total=4435,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.126 job3: (groupid=0, jobs=1): err= 0: pid=74838: Tue May 14 02:13:29 2024 00:16:15.126 read: IOPS=4286, BW=16.7MiB/s (17.6MB/s)(16.8MiB/1002msec) 00:16:15.126 slat (usec): min=4, max=4341, avg=108.89, stdev=544.91 00:16:15.126 clat (usec): min=613, max=18442, avg=14023.40, stdev=1649.24 00:16:15.126 lat (usec): min=4064, max=20399, avg=14132.28, stdev=1655.81 00:16:15.126 clat percentiles (usec): 00:16:15.126 | 1.00th=[ 4948], 5.00th=[11207], 10.00th=[12125], 20.00th=[13566], 00:16:15.126 | 30.00th=[13829], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:16:15.126 | 70.00th=[14615], 80.00th=[15139], 90.00th=[15664], 95.00th=[16188], 00:16:15.126 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:16:15.126 | 99.99th=[18482] 00:16:15.126 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:16:15.126 slat (usec): min=10, max=4514, avg=108.10, stdev=534.28 00:16:15.126 clat (usec): min=9675, max=18856, avg=14366.13, stdev=1733.88 00:16:15.126 lat (usec): min=9737, max=18875, avg=14474.23, stdev=1693.33 00:16:15.126 clat percentiles (usec): 00:16:15.126 | 1.00th=[10683], 5.00th=[11207], 10.00th=[11469], 20.00th=[11863], 00:16:15.126 | 30.00th=[14091], 40.00th=[14615], 50.00th=[15139], 60.00th=[15401], 00:16:15.126 | 70.00th=[15533], 80.00th=[15795], 90.00th=[15926], 95.00th=[16057], 00:16:15.126 | 99.00th=[16712], 99.50th=[17957], 99.90th=[18744], 99.95th=[18744], 00:16:15.126 | 99.99th=[18744] 00:16:15.126 bw ( KiB/s): min=18224, max=18677, per=23.56%, avg=18450.50, stdev=320.32, samples=2 00:16:15.126 iops : min= 4556, max= 4669, avg=4612.50, stdev=79.90, samples=2 00:16:15.126 lat (usec) : 750=0.01% 00:16:15.126 lat (msec) : 10=0.85%, 20=99.14% 00:16:15.126 cpu : usr=4.00%, sys=12.59%, ctx=474, majf=0, minf=11 00:16:15.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:15.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.126 issued rwts: total=4295,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.126 00:16:15.126 Run status group 0 (all jobs): 00:16:15.126 READ: bw=73.8MiB/s (77.4MB/s), 16.7MiB/s-19.9MiB/s (17.6MB/s-20.9MB/s), io=74.1MiB (77.7MB), run=1002-1004msec 00:16:15.126 WRITE: bw=76.5MiB/s (80.2MB/s), 17.9MiB/s-20.7MiB/s (18.8MB/s-21.7MB/s), io=76.8MiB (80.5MB), run=1002-1004msec 00:16:15.126 00:16:15.126 Disk stats (read/write): 00:16:15.126 nvme0n1: ios=4329/4608, merge=0/0, ticks=24208/24626, in_queue=48834, util=87.56% 00:16:15.126 nvme0n2: ios=4137/4608, merge=0/0, ticks=15725/17108, in_queue=32833, util=87.55% 00:16:15.126 nvme0n3: ios=3609/4096, merge=0/0, ticks=15327/16552, in_queue=31879, util=89.09% 00:16:15.126 nvme0n4: 
ios=3584/4016, merge=0/0, ticks=15324/16637, in_queue=31961, util=89.64% 00:16:15.126 02:13:29 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:15.126 [global] 00:16:15.126 thread=1 00:16:15.126 invalidate=1 00:16:15.126 rw=randwrite 00:16:15.126 time_based=1 00:16:15.126 runtime=1 00:16:15.126 ioengine=libaio 00:16:15.126 direct=1 00:16:15.126 bs=4096 00:16:15.126 iodepth=128 00:16:15.126 norandommap=0 00:16:15.126 numjobs=1 00:16:15.126 00:16:15.126 verify_dump=1 00:16:15.126 verify_backlog=512 00:16:15.126 verify_state_save=0 00:16:15.126 do_verify=1 00:16:15.126 verify=crc32c-intel 00:16:15.126 [job0] 00:16:15.126 filename=/dev/nvme0n1 00:16:15.126 [job1] 00:16:15.126 filename=/dev/nvme0n2 00:16:15.126 [job2] 00:16:15.126 filename=/dev/nvme0n3 00:16:15.126 [job3] 00:16:15.126 filename=/dev/nvme0n4 00:16:15.126 Could not set queue depth (nvme0n1) 00:16:15.126 Could not set queue depth (nvme0n2) 00:16:15.126 Could not set queue depth (nvme0n3) 00:16:15.126 Could not set queue depth (nvme0n4) 00:16:15.126 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.126 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.126 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.126 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.126 fio-3.35 00:16:15.126 Starting 4 threads 00:16:16.503 00:16:16.503 job0: (groupid=0, jobs=1): err= 0: pid=74893: Tue May 14 02:13:30 2024 00:16:16.503 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:16:16.503 slat (usec): min=4, max=4672, avg=70.32, stdev=360.35 00:16:16.503 clat (usec): min=5461, max=15251, avg=9298.38, stdev=1308.31 00:16:16.503 lat (usec): min=5480, max=15284, avg=9368.71, stdev=1329.75 00:16:16.503 clat percentiles (usec): 00:16:16.503 | 1.00th=[ 6063], 5.00th=[ 7242], 10.00th=[ 7963], 20.00th=[ 8455], 00:16:16.503 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:16:16.503 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11076], 95.00th=[11469], 00:16:16.503 | 99.00th=[12780], 99.50th=[13566], 99.90th=[14877], 99.95th=[15008], 00:16:16.503 | 99.99th=[15270] 00:16:16.503 write: IOPS=6800, BW=26.6MiB/s (27.9MB/s)(26.7MiB/1005msec); 0 zone resets 00:16:16.503 slat (usec): min=10, max=4338, avg=70.70, stdev=321.85 00:16:16.503 clat (usec): min=4367, max=15609, avg=9544.10, stdev=1447.33 00:16:16.503 lat (usec): min=4384, max=15629, avg=9614.80, stdev=1437.20 00:16:16.503 clat percentiles (usec): 00:16:16.503 | 1.00th=[ 5932], 5.00th=[ 6652], 10.00th=[ 7701], 20.00th=[ 8848], 00:16:16.503 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9372], 60.00th=[ 9503], 00:16:16.503 | 70.00th=[ 9765], 80.00th=[10814], 90.00th=[11469], 95.00th=[11863], 00:16:16.503 | 99.00th=[12911], 99.50th=[13829], 99.90th=[15533], 99.95th=[15533], 00:16:16.503 | 99.99th=[15664] 00:16:16.503 bw ( KiB/s): min=25368, max=28296, per=53.77%, avg=26832.00, stdev=2070.41, samples=2 00:16:16.503 iops : min= 6342, max= 7074, avg=6708.00, stdev=517.60, samples=2 00:16:16.503 lat (msec) : 10=72.82%, 20=27.18% 00:16:16.503 cpu : usr=6.08%, sys=16.73%, ctx=709, majf=0, minf=7 00:16:16.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:16.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.503 issued rwts: total=6656,6835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.503 job1: (groupid=0, jobs=1): err= 0: pid=74894: Tue May 14 02:13:30 2024 00:16:16.503 read: IOPS=1719, BW=6878KiB/s (7043kB/s)(8288KiB/1205msec) 00:16:16.503 slat (usec): min=2, max=44572, avg=243.98, stdev=1497.35 00:16:16.503 clat (msec): min=8, max=271, avg=30.39, stdev=28.86 00:16:16.503 lat (msec): min=8, max=271, avg=30.64, stdev=29.13 00:16:16.503 clat percentiles (msec): 00:16:16.503 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 15], 00:16:16.503 | 30.00th=[ 19], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 28], 00:16:16.503 | 70.00th=[ 32], 80.00th=[ 43], 90.00th=[ 55], 95.00th=[ 65], 00:16:16.503 | 99.00th=[ 218], 99.50th=[ 268], 99.90th=[ 271], 99.95th=[ 271], 00:16:16.503 | 99.99th=[ 271] 00:16:16.503 write: IOPS=2124, BW=8498KiB/s (8702kB/s)(10.0MiB/1205msec); 0 zone resets 00:16:16.503 slat (usec): min=4, max=10541, avg=185.85, stdev=879.90 00:16:16.503 clat (msec): min=7, max=297, avg=34.87, stdev=53.18 00:16:16.503 lat (msec): min=7, max=297, avg=35.06, stdev=53.20 00:16:16.503 clat percentiles (msec): 00:16:16.503 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 15], 20.00th=[ 18], 00:16:16.503 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 25], 00:16:16.503 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 39], 95.00th=[ 55], 00:16:16.503 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:16:16.503 | 99.99th=[ 300] 00:16:16.503 bw ( KiB/s): min= 7368, max=12288, per=19.69%, avg=9828.00, stdev=3478.97, samples=2 00:16:16.503 iops : min= 1842, max= 3072, avg=2457.00, stdev=869.74, samples=2 00:16:16.503 lat (msec) : 10=2.18%, 20=28.52%, 50=59.69%, 100=6.87%, 250=0.11% 00:16:16.504 lat (msec) : 500=2.63% 00:16:16.504 cpu : usr=1.91%, sys=5.65%, ctx=644, majf=0, minf=13 00:16:16.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:16:16.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.504 issued rwts: total=2072,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.504 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.504 job2: (groupid=0, jobs=1): err= 0: pid=74895: Tue May 14 02:13:30 2024 00:16:16.504 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:16:16.504 slat (usec): min=5, max=12209, avg=129.45, stdev=759.76 00:16:16.504 clat (usec): min=4917, max=44536, avg=15071.93, stdev=5953.15 00:16:16.504 lat (usec): min=4932, max=44552, avg=15201.38, stdev=6013.75 00:16:16.504 clat percentiles (usec): 00:16:16.504 | 1.00th=[ 8356], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11338], 00:16:16.504 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12911], 60.00th=[13566], 00:16:16.504 | 70.00th=[15664], 80.00th=[18482], 90.00th=[22414], 95.00th=[27919], 00:16:16.504 | 99.00th=[38536], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:16:16.504 | 99.99th=[44303] 00:16:16.504 write: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1004msec); 0 zone resets 00:16:16.504 slat (usec): min=6, max=8386, avg=167.09, stdev=701.92 00:16:16.504 clat (usec): min=2220, max=66308, avg=23385.35, stdev=14306.94 00:16:16.504 lat (usec): min=3773, max=66323, avg=23552.44, stdev=14393.83 00:16:16.504 clat percentiles (usec): 00:16:16.504 | 1.00th=[ 
5014], 5.00th=[ 8979], 10.00th=[10814], 20.00th=[12256], 00:16:16.504 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14877], 60.00th=[23987], 00:16:16.504 | 70.00th=[29754], 80.00th=[35390], 90.00th=[44827], 95.00th=[52691], 00:16:16.504 | 99.00th=[64226], 99.50th=[65799], 99.90th=[66323], 99.95th=[66323], 00:16:16.504 | 99.99th=[66323] 00:16:16.504 bw ( KiB/s): min=10128, max=16400, per=26.58%, avg=13264.00, stdev=4434.97, samples=2 00:16:16.504 iops : min= 2532, max= 4100, avg=3316.00, stdev=1108.74, samples=2 00:16:16.504 lat (msec) : 4=0.09%, 10=6.46%, 20=63.84%, 50=26.60%, 100=3.01% 00:16:16.504 cpu : usr=3.19%, sys=9.37%, ctx=469, majf=0, minf=15 00:16:16.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:16.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.504 issued rwts: total=3072,3440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.504 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.504 job3: (groupid=0, jobs=1): err= 0: pid=74896: Tue May 14 02:13:30 2024 00:16:16.504 read: IOPS=1696, BW=6787KiB/s (6950kB/s)(8192KiB/1207msec) 00:16:16.504 slat (usec): min=4, max=39262, avg=217.16, stdev=1413.98 00:16:16.504 clat (usec): min=14427, max=75511, avg=27476.51, stdev=9669.35 00:16:16.504 lat (usec): min=14443, max=75538, avg=27693.67, stdev=9783.72 00:16:16.504 clat percentiles (usec): 00:16:16.504 | 1.00th=[14615], 5.00th=[18744], 10.00th=[20055], 20.00th=[21365], 00:16:16.504 | 30.00th=[21627], 40.00th=[23725], 50.00th=[25297], 60.00th=[26870], 00:16:16.504 | 70.00th=[28443], 80.00th=[31589], 90.00th=[34341], 95.00th=[58983], 00:16:16.504 | 99.00th=[60031], 99.50th=[60031], 99.90th=[61080], 99.95th=[73925], 00:16:16.504 | 99.99th=[76022] 00:16:16.504 write: IOPS=1842, BW=7370KiB/s (7547kB/s)(8896KiB/1207msec); 0 zone resets 00:16:16.504 slat (usec): min=4, max=26417, avg=242.33, stdev=1147.07 00:16:16.504 clat (msec): min=11, max=297, avg=43.23, stdev=57.10 00:16:16.504 lat (msec): min=11, max=298, avg=43.48, stdev=57.22 00:16:16.504 clat percentiles (msec): 00:16:16.504 | 1.00th=[ 15], 5.00th=[ 18], 10.00th=[ 20], 20.00th=[ 22], 00:16:16.504 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 26], 60.00th=[ 28], 00:16:16.504 | 70.00th=[ 32], 80.00th=[ 45], 90.00th=[ 61], 95.00th=[ 236], 00:16:16.504 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 300], 99.95th=[ 300], 00:16:16.504 | 99.99th=[ 300] 00:16:16.504 bw ( KiB/s): min= 6464, max=10312, per=16.81%, avg=8388.00, stdev=2720.95, samples=2 00:16:16.504 iops : min= 1616, max= 2578, avg=2097.00, stdev=680.24, samples=2 00:16:16.504 lat (msec) : 20=10.91%, 50=77.74%, 100=8.38%, 250=1.10%, 500=1.87% 00:16:16.504 cpu : usr=1.66%, sys=5.56%, ctx=547, majf=0, minf=17 00:16:16.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:16:16.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.504 issued rwts: total=2048,2224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.504 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.504 00:16:16.504 Run status group 0 (all jobs): 00:16:16.504 READ: bw=44.8MiB/s (47.0MB/s), 6787KiB/s-25.9MiB/s (6950kB/s-27.1MB/s), io=54.1MiB (56.7MB), run=1004-1207msec 00:16:16.504 WRITE: bw=48.7MiB/s (51.1MB/s), 7370KiB/s-26.6MiB/s (7547kB/s-27.9MB/s), io=58.8MiB (61.7MB), run=1004-1207msec 00:16:16.504 
00:16:16.504 Disk stats (read/write): 00:16:16.504 nvme0n1: ios=5682/5861, merge=0/0, ticks=24288/23776, in_queue=48064, util=88.37% 00:16:16.504 nvme0n2: ios=2106/2560, merge=0/0, ticks=27972/31647, in_queue=59619, util=90.30% 00:16:16.504 nvme0n3: ios=2560/2791, merge=0/0, ticks=26272/47754, in_queue=74026, util=89.13% 00:16:16.504 nvme0n4: ios=2048/2167, merge=0/0, ticks=26737/29574, in_queue=56311, util=91.40% 00:16:16.504 02:13:30 -- target/fio.sh@55 -- # sync 00:16:16.504 02:13:30 -- target/fio.sh@59 -- # fio_pid=74915 00:16:16.504 02:13:30 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:16.504 02:13:30 -- target/fio.sh@61 -- # sleep 3 00:16:16.504 [global] 00:16:16.504 thread=1 00:16:16.504 invalidate=1 00:16:16.504 rw=read 00:16:16.504 time_based=1 00:16:16.504 runtime=10 00:16:16.504 ioengine=libaio 00:16:16.504 direct=1 00:16:16.504 bs=4096 00:16:16.504 iodepth=1 00:16:16.504 norandommap=1 00:16:16.504 numjobs=1 00:16:16.504 00:16:16.504 [job0] 00:16:16.504 filename=/dev/nvme0n1 00:16:16.504 [job1] 00:16:16.504 filename=/dev/nvme0n2 00:16:16.504 [job2] 00:16:16.504 filename=/dev/nvme0n3 00:16:16.504 [job3] 00:16:16.504 filename=/dev/nvme0n4 00:16:16.504 Could not set queue depth (nvme0n1) 00:16:16.504 Could not set queue depth (nvme0n2) 00:16:16.504 Could not set queue depth (nvme0n3) 00:16:16.504 Could not set queue depth (nvme0n4) 00:16:16.763 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.763 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.763 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.763 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.763 fio-3.35 00:16:16.763 Starting 4 threads 00:16:20.046 02:13:33 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:20.046 fio: pid=74962, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:20.046 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=59707392, buflen=4096 00:16:20.046 02:13:34 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:20.046 fio: pid=74961, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:20.046 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=67706880, buflen=4096 00:16:20.046 02:13:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.046 02:13:34 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:20.306 fio: pid=74959, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:20.306 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=7520256, buflen=4096 00:16:20.306 02:13:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.306 02:13:34 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:20.565 fio: pid=74960, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:20.565 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=12677120, buflen=4096 00:16:20.565 02:13:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:16:20.565 02:13:34 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:20.565 00:16:20.565 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=74959: Tue May 14 02:13:34 2024 00:16:20.565 read: IOPS=5327, BW=20.8MiB/s (21.8MB/s)(71.2MiB/3420msec) 00:16:20.565 slat (usec): min=8, max=11853, avg=18.75, stdev=144.56 00:16:20.565 clat (usec): min=11, max=7711, avg=167.44, stdev=71.56 00:16:20.565 lat (usec): min=143, max=12086, avg=186.19, stdev=161.43 00:16:20.565 clat percentiles (usec): 00:16:20.565 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:16:20.565 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:16:20.565 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 186], 95.00th=[ 233], 00:16:20.565 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 433], 99.95th=[ 775], 00:16:20.565 | 99.99th=[ 3392] 00:16:20.565 bw ( KiB/s): min=21520, max=22896, per=29.64%, avg=22317.33, stdev=599.13, samples=6 00:16:20.565 iops : min= 5380, max= 5724, avg=5579.33, stdev=149.78, samples=6 00:16:20.565 lat (usec) : 20=0.01%, 50=0.01%, 250=96.32%, 500=3.58%, 750=0.02% 00:16:20.565 lat (usec) : 1000=0.02% 00:16:20.565 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 00:16:20.565 cpu : usr=1.73%, sys=7.17%, ctx=18237, majf=0, minf=1 00:16:20.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.565 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.565 issued rwts: total=18221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.565 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=74960: Tue May 14 02:13:34 2024 00:16:20.565 read: IOPS=5329, BW=20.8MiB/s (21.8MB/s)(76.1MiB/3655msec) 00:16:20.565 slat (usec): min=8, max=8286, avg=17.71, stdev=132.54 00:16:20.565 clat (usec): min=128, max=3817, avg=168.56, stdev=57.29 00:16:20.565 lat (usec): min=142, max=8457, avg=186.27, stdev=145.27 00:16:20.565 clat percentiles (usec): 00:16:20.565 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:16:20.565 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:16:20.565 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 231], 00:16:20.565 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 465], 99.95th=[ 807], 00:16:20.565 | 99.99th=[ 3326] 00:16:20.565 bw ( KiB/s): min=16773, max=22728, per=28.54%, avg=21488.71, stdev=2109.40, samples=7 00:16:20.565 iops : min= 4193, max= 5682, avg=5372.14, stdev=527.44, samples=7 00:16:20.565 lat (usec) : 250=96.77%, 500=3.13%, 750=0.04%, 1000=0.01% 00:16:20.565 lat (msec) : 2=0.03%, 4=0.02% 00:16:20.565 cpu : usr=1.45%, sys=6.79%, ctx=19494, majf=0, minf=1 00:16:20.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.565 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.565 issued rwts: total=19480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.565 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=74961: Tue May 14 02:13:34 2024 00:16:20.565 read: IOPS=5172, BW=20.2MiB/s 
(21.2MB/s)(64.6MiB/3196msec) 00:16:20.565 slat (usec): min=8, max=11201, avg=18.30, stdev=106.21 00:16:20.565 clat (usec): min=42, max=2536, avg=173.53, stdev=40.42 00:16:20.565 lat (usec): min=153, max=11387, avg=191.83, stdev=113.56 00:16:20.565 clat percentiles (usec): 00:16:20.565 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:16:20.565 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:16:20.565 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 202], 00:16:20.565 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 371], 99.95th=[ 510], 00:16:20.565 | 99.99th=[ 2180] 00:16:20.565 bw ( KiB/s): min=20464, max=21936, per=28.17%, avg=21212.00, stdev=503.73, samples=6 00:16:20.565 iops : min= 5116, max= 5484, avg=5303.00, stdev=125.93, samples=6 00:16:20.565 lat (usec) : 50=0.01%, 250=96.50%, 500=3.44%, 750=0.01%, 1000=0.02% 00:16:20.565 lat (msec) : 2=0.01%, 4=0.01% 00:16:20.565 cpu : usr=1.38%, sys=7.45%, ctx=16542, majf=0, minf=1 00:16:20.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.565 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.565 issued rwts: total=16531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.566 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=74962: Tue May 14 02:13:34 2024 00:16:20.566 read: IOPS=4940, BW=19.3MiB/s (20.2MB/s)(56.9MiB/2951msec) 00:16:20.566 slat (nsec): min=12994, max=77712, avg=16275.63, stdev=3585.46 00:16:20.566 clat (usec): min=141, max=2529, avg=184.59, stdev=33.16 00:16:20.566 lat (usec): min=156, max=2554, avg=200.87, stdev=33.67 00:16:20.566 clat percentiles (usec): 00:16:20.566 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:16:20.566 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 188], 00:16:20.566 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 227], 00:16:20.566 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 343], 99.95th=[ 510], 00:16:20.566 | 99.99th=[ 996] 00:16:20.566 bw ( KiB/s): min=18136, max=21864, per=25.79%, avg=19419.20, stdev=1652.97, samples=5 00:16:20.566 iops : min= 4534, max= 5466, avg=4854.80, stdev=413.24, samples=5 00:16:20.566 lat (usec) : 250=99.45%, 500=0.49%, 750=0.03%, 1000=0.01% 00:16:20.566 lat (msec) : 4=0.01% 00:16:20.566 cpu : usr=1.59%, sys=6.81%, ctx=14578, majf=0, minf=1 00:16:20.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.566 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.566 issued rwts: total=14578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.566 00:16:20.566 Run status group 0 (all jobs): 00:16:20.566 READ: bw=73.5MiB/s (77.1MB/s), 19.3MiB/s-20.8MiB/s (20.2MB/s-21.8MB/s), io=269MiB (282MB), run=2951-3655msec 00:16:20.566 00:16:20.566 Disk stats (read/write): 00:16:20.566 nvme0n1: ios=18028/0, merge=0/0, ticks=3063/0, in_queue=3063, util=95.25% 00:16:20.566 nvme0n2: ios=19285/0, merge=0/0, ticks=3295/0, in_queue=3295, util=95.56% 00:16:20.566 nvme0n3: ios=16287/0, merge=0/0, ticks=2850/0, in_queue=2850, util=96.21% 00:16:20.566 nvme0n4: ios=14145/0, merge=0/0, ticks=2675/0, in_queue=2675, util=96.70% 00:16:20.824 
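The read workload summarized above (fio.sh@58) is the hotplug phase of the test: fio is started in the background against the four exported namespaces, and while it is still reading, the script deletes the backing bdevs, so every job ends with err=121 (Remote I/O error) by design. A condensed sketch of that sequence, reconstructed from the commands traced in this log (not a verbatim excerpt of fio.sh):

    # start a 10-second read workload in the background and remember its pid
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # pull the backing bdevs out from under the active namespaces
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1

    # fio is expected to fail once its devices disappear; the test records that as success
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'
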
02:13:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.824 02:13:35 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:20.824 02:13:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.824 02:13:35 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:21.391 02:13:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:21.391 02:13:35 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:21.391 02:13:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:21.391 02:13:35 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:21.649 02:13:36 -- target/fio.sh@69 -- # fio_status=0 00:16:21.649 02:13:36 -- target/fio.sh@70 -- # wait 74915 00:16:21.649 02:13:36 -- target/fio.sh@70 -- # fio_status=4 00:16:21.649 02:13:36 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.649 02:13:36 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:21.649 02:13:36 -- common/autotest_common.sh@1198 -- # local i=0 00:16:21.649 02:13:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:21.649 02:13:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.649 02:13:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:21.649 02:13:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.649 nvmf hotplug test: fio failed as expected 00:16:21.649 02:13:36 -- common/autotest_common.sh@1210 -- # return 0 00:16:21.649 02:13:36 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:21.649 02:13:36 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:21.649 02:13:36 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.217 02:13:36 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:22.217 02:13:36 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:22.217 02:13:36 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:22.217 02:13:36 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:22.217 02:13:36 -- target/fio.sh@91 -- # nvmftestfini 00:16:22.217 02:13:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:22.217 02:13:36 -- nvmf/common.sh@116 -- # sync 00:16:22.217 02:13:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:22.217 02:13:36 -- nvmf/common.sh@119 -- # set +e 00:16:22.217 02:13:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:22.217 02:13:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:22.217 rmmod nvme_tcp 00:16:22.217 rmmod nvme_fabrics 00:16:22.217 rmmod nvme_keyring 00:16:22.217 02:13:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:22.217 02:13:36 -- nvmf/common.sh@123 -- # set -e 00:16:22.217 02:13:36 -- nvmf/common.sh@124 -- # return 0 00:16:22.217 02:13:36 -- nvmf/common.sh@477 -- # '[' -n 74424 ']' 00:16:22.217 02:13:36 -- nvmf/common.sh@478 -- # killprocess 74424 00:16:22.217 02:13:36 -- common/autotest_common.sh@926 -- # '[' -z 74424 ']' 00:16:22.217 02:13:36 -- common/autotest_common.sh@930 -- # kill -0 74424 
00:16:22.217 02:13:36 -- common/autotest_common.sh@931 -- # uname 00:16:22.217 02:13:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:22.217 02:13:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74424 00:16:22.217 killing process with pid 74424 00:16:22.217 02:13:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:22.217 02:13:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:22.217 02:13:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74424' 00:16:22.217 02:13:36 -- common/autotest_common.sh@945 -- # kill 74424 00:16:22.217 02:13:36 -- common/autotest_common.sh@950 -- # wait 74424 00:16:22.217 02:13:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:22.217 02:13:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:22.217 02:13:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:22.217 02:13:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.217 02:13:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:22.477 02:13:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.477 02:13:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.477 02:13:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.477 02:13:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:22.477 00:16:22.477 real 0m19.496s 00:16:22.477 user 1m13.765s 00:16:22.477 sys 0m9.853s 00:16:22.477 02:13:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.477 02:13:36 -- common/autotest_common.sh@10 -- # set +x 00:16:22.477 ************************************ 00:16:22.477 END TEST nvmf_fio_target 00:16:22.477 ************************************ 00:16:22.477 02:13:36 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:22.477 02:13:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:22.477 02:13:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:22.477 02:13:36 -- common/autotest_common.sh@10 -- # set +x 00:16:22.477 ************************************ 00:16:22.477 START TEST nvmf_bdevio 00:16:22.477 ************************************ 00:16:22.477 02:13:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:22.477 * Looking for test storage... 
00:16:22.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:22.477 02:13:36 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.477 02:13:36 -- nvmf/common.sh@7 -- # uname -s 00:16:22.477 02:13:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.477 02:13:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.477 02:13:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.477 02:13:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.477 02:13:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.477 02:13:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.477 02:13:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.477 02:13:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.477 02:13:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.477 02:13:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.477 02:13:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:16:22.477 02:13:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:16:22.477 02:13:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.477 02:13:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.477 02:13:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.477 02:13:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.477 02:13:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.477 02:13:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.477 02:13:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.477 02:13:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.477 02:13:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.477 02:13:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.477 02:13:36 -- 
paths/export.sh@5 -- # export PATH 00:16:22.478 02:13:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.478 02:13:36 -- nvmf/common.sh@46 -- # : 0 00:16:22.478 02:13:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:22.478 02:13:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:22.478 02:13:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:22.478 02:13:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.478 02:13:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.478 02:13:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:22.478 02:13:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:22.478 02:13:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:22.478 02:13:36 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.478 02:13:36 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.478 02:13:36 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:22.478 02:13:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:22.478 02:13:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.478 02:13:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:22.478 02:13:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:22.478 02:13:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:22.478 02:13:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.478 02:13:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.478 02:13:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.478 02:13:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:22.478 02:13:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:22.478 02:13:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:22.478 02:13:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:22.478 02:13:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:22.478 02:13:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:22.478 02:13:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.478 02:13:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.478 02:13:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:22.478 02:13:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:22.478 02:13:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.478 02:13:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.478 02:13:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.478 02:13:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.478 02:13:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.478 02:13:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.478 02:13:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.478 02:13:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.478 02:13:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:22.478 
02:13:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:22.478 Cannot find device "nvmf_tgt_br" 00:16:22.478 02:13:37 -- nvmf/common.sh@154 -- # true 00:16:22.478 02:13:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.478 Cannot find device "nvmf_tgt_br2" 00:16:22.478 02:13:37 -- nvmf/common.sh@155 -- # true 00:16:22.478 02:13:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:22.478 02:13:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:22.478 Cannot find device "nvmf_tgt_br" 00:16:22.478 02:13:37 -- nvmf/common.sh@157 -- # true 00:16:22.478 02:13:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:22.736 Cannot find device "nvmf_tgt_br2" 00:16:22.736 02:13:37 -- nvmf/common.sh@158 -- # true 00:16:22.736 02:13:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:22.736 02:13:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:22.736 02:13:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.736 02:13:37 -- nvmf/common.sh@161 -- # true 00:16:22.736 02:13:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.736 02:13:37 -- nvmf/common.sh@162 -- # true 00:16:22.736 02:13:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.736 02:13:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.736 02:13:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.736 02:13:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.736 02:13:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.736 02:13:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.736 02:13:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.736 02:13:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:22.736 02:13:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:22.736 02:13:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:22.736 02:13:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:22.736 02:13:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:22.736 02:13:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:22.736 02:13:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.736 02:13:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:22.736 02:13:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:22.737 02:13:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:22.737 02:13:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:22.737 02:13:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.737 02:13:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.737 02:13:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.737 02:13:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.737 02:13:37 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:22.997 02:13:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:22.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:22.997 00:16:22.997 --- 10.0.0.2 ping statistics --- 00:16:22.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.997 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:22.997 02:13:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:22.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:22.997 00:16:22.997 --- 10.0.0.3 ping statistics --- 00:16:22.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.997 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:22.997 02:13:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:16:22.997 00:16:22.997 --- 10.0.0.1 ping statistics --- 00:16:22.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.997 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:22.997 02:13:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.997 02:13:37 -- nvmf/common.sh@421 -- # return 0 00:16:22.997 02:13:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:22.997 02:13:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.997 02:13:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:22.997 02:13:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:22.997 02:13:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.997 02:13:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:22.997 02:13:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:22.997 02:13:37 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:22.997 02:13:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:22.997 02:13:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:22.997 02:13:37 -- common/autotest_common.sh@10 -- # set +x 00:16:22.997 02:13:37 -- nvmf/common.sh@469 -- # nvmfpid=75281 00:16:22.997 02:13:37 -- nvmf/common.sh@470 -- # waitforlisten 75281 00:16:22.997 02:13:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:22.997 02:13:37 -- common/autotest_common.sh@819 -- # '[' -z 75281 ']' 00:16:22.997 02:13:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.997 02:13:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:22.997 02:13:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.997 02:13:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:22.997 02:13:37 -- common/autotest_common.sh@10 -- # set +x 00:16:22.997 [2024-05-14 02:13:37.433039] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
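The nvmf_veth_init block traced above builds the test topology from scratch: a network namespace for the target, veth pairs whose bridge-side peers join nvmf_br, 10.0.0.x/24 addresses on either side, iptables rules admitting NVMe/TCP traffic on port 4420, and single-packet pings to prove reachability. A condensed sketch of the same setup, assuming root and the interface names used in the log (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is configured the same way and omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator end + bridge-side peer
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br           # target end + bridge-side peer
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                           # bridge the two halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT # admit the NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator

With the links up and verified, this is what the modprobe nvme-tcp and nvmfappstart steps traced above then build on: the initiator driver on the host side, and nvmf_tgt running inside nvmf_tgt_ns_spdk.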
00:16:22.997 [2024-05-14 02:13:37.433130] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.997 [2024-05-14 02:13:37.573329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.257 [2024-05-14 02:13:37.642569] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:23.257 [2024-05-14 02:13:37.643018] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.257 [2024-05-14 02:13:37.643159] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.257 [2024-05-14 02:13:37.643268] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.257 [2024-05-14 02:13:37.643929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:23.257 [2024-05-14 02:13:37.644040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:23.257 [2024-05-14 02:13:37.644199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:23.257 [2024-05-14 02:13:37.644299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.824 02:13:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:23.824 02:13:38 -- common/autotest_common.sh@852 -- # return 0 00:16:23.824 02:13:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:23.824 02:13:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:23.824 02:13:38 -- common/autotest_common.sh@10 -- # set +x 00:16:24.100 02:13:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.100 02:13:38 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:24.100 02:13:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.100 02:13:38 -- common/autotest_common.sh@10 -- # set +x 00:16:24.100 [2024-05-14 02:13:38.430760] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.100 02:13:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.100 02:13:38 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:24.100 02:13:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.100 02:13:38 -- common/autotest_common.sh@10 -- # set +x 00:16:24.100 Malloc0 00:16:24.101 02:13:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.101 02:13:38 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:24.101 02:13:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.101 02:13:38 -- common/autotest_common.sh@10 -- # set +x 00:16:24.101 02:13:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.101 02:13:38 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:24.101 02:13:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.101 02:13:38 -- common/autotest_common.sh@10 -- # set +x 00:16:24.101 02:13:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.101 02:13:38 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.101 02:13:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.101 02:13:38 -- common/autotest_common.sh@10 -- # set +x 00:16:24.101 
[2024-05-14 02:13:38.486517] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.101 02:13:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.101 02:13:38 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:24.101 02:13:38 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:24.101 02:13:38 -- nvmf/common.sh@520 -- # config=() 00:16:24.101 02:13:38 -- nvmf/common.sh@520 -- # local subsystem config 00:16:24.101 02:13:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:24.101 02:13:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:24.101 { 00:16:24.101 "params": { 00:16:24.101 "name": "Nvme$subsystem", 00:16:24.101 "trtype": "$TEST_TRANSPORT", 00:16:24.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:24.101 "adrfam": "ipv4", 00:16:24.101 "trsvcid": "$NVMF_PORT", 00:16:24.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:24.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:24.101 "hdgst": ${hdgst:-false}, 00:16:24.101 "ddgst": ${ddgst:-false} 00:16:24.101 }, 00:16:24.101 "method": "bdev_nvme_attach_controller" 00:16:24.101 } 00:16:24.101 EOF 00:16:24.101 )") 00:16:24.101 02:13:38 -- nvmf/common.sh@542 -- # cat 00:16:24.101 02:13:38 -- nvmf/common.sh@544 -- # jq . 00:16:24.101 02:13:38 -- nvmf/common.sh@545 -- # IFS=, 00:16:24.101 02:13:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:24.101 "params": { 00:16:24.101 "name": "Nvme1", 00:16:24.101 "trtype": "tcp", 00:16:24.101 "traddr": "10.0.0.2", 00:16:24.101 "adrfam": "ipv4", 00:16:24.101 "trsvcid": "4420", 00:16:24.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.101 "hdgst": false, 00:16:24.101 "ddgst": false 00:16:24.101 }, 00:16:24.101 "method": "bdev_nvme_attach_controller" 00:16:24.101 }' 00:16:24.101 [2024-05-14 02:13:38.541455] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:24.101 [2024-05-14 02:13:38.541543] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75335 ] 00:16:24.359 [2024-05-14 02:13:38.682784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:24.359 [2024-05-14 02:13:38.753022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.359 [2024-05-14 02:13:38.753107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.359 [2024-05-14 02:13:38.753114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.359 [2024-05-14 02:13:38.893117] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
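bdevio.sh configures the target entirely over JSON-RPC before any I/O runs: create the TCP transport, back it with a 64 MiB malloc bdev, create subsystem cnode1, attach the bdev as a namespace, and open the 10.0.0.2:4420 listener, which produces the nvmf_tcp_listen notice above. The trace drives these calls through the framework's rpc_cmd wrapper; the equivalent sequence with the stock scripts/rpc.py client would look roughly like this (socket path as printed in the waitforlisten message):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport; -o and -u 8192 mirror the options the trace passes
$RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # exported to the initiator as Nvme1n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The generated JSON shown in the trace then only has to describe one bdev_nvme_attach_controller call so the bdevio process can reach that namespace over TCP.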
00:16:24.359 [2024-05-14 02:13:38.893174] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:24.359 I/O targets: 00:16:24.359 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:24.359 00:16:24.359 00:16:24.359 CUnit - A unit testing framework for C - Version 2.1-3 00:16:24.359 http://cunit.sourceforge.net/ 00:16:24.359 00:16:24.359 00:16:24.359 Suite: bdevio tests on: Nvme1n1 00:16:24.359 Test: blockdev write read block ...passed 00:16:24.617 Test: blockdev write zeroes read block ...passed 00:16:24.617 Test: blockdev write zeroes read no split ...passed 00:16:24.617 Test: blockdev write zeroes read split ...passed 00:16:24.617 Test: blockdev write zeroes read split partial ...passed 00:16:24.617 Test: blockdev reset ...[2024-05-14 02:13:39.011587] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:24.617 [2024-05-14 02:13:39.011722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaea810 (9): Bad file descriptor 00:16:24.617 [2024-05-14 02:13:39.023009] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:24.617 passed 00:16:24.617 Test: blockdev write read 8 blocks ...passed 00:16:24.617 Test: blockdev write read size > 128k ...passed 00:16:24.617 Test: blockdev write read invalid size ...passed 00:16:24.617 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:24.617 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:24.617 Test: blockdev write read max offset ...passed 00:16:24.617 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:24.617 Test: blockdev writev readv 8 blocks ...passed 00:16:24.617 Test: blockdev writev readv 30 x 1block ...passed 00:16:24.617 Test: blockdev writev readv block ...passed 00:16:24.617 Test: blockdev writev readv size > 128k ...passed 00:16:24.617 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:24.617 Test: blockdev comparev and writev ...[2024-05-14 02:13:39.203338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.617 [2024-05-14 02:13:39.203413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.617 [2024-05-14 02:13:39.203445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.618 [2024-05-14 02:13:39.203463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:24.618 [2024-05-14 02:13:39.203941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.618 [2024-05-14 02:13:39.203985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:24.618 [2024-05-14 02:13:39.204013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.618 [2024-05-14 02:13:39.204029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:24.618 [2024-05-14 02:13:39.204562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.618 [2024-05-14 02:13:39.204604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:24.618 [2024-05-14 02:13:39.204631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.618 [2024-05-14 02:13:39.204647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:24.618 [2024-05-14 02:13:39.205222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.618 [2024-05-14 02:13:39.205253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:24.618 [2024-05-14 02:13:39.205279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.618 [2024-05-14 02:13:39.205294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:24.876 passed 00:16:24.876 Test: blockdev nvme passthru rw ...passed 00:16:24.876 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:13:39.287239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.876 [2024-05-14 02:13:39.287274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:24.876 [2024-05-14 02:13:39.287623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.876 [2024-05-14 02:13:39.287653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:24.876 [2024-05-14 02:13:39.287787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.876 [2024-05-14 02:13:39.287808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:24.876 [2024-05-14 02:13:39.287975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.876 [2024-05-14 02:13:39.288009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:24.876 passed 00:16:24.876 Test: blockdev nvme admin passthru ...passed 00:16:24.876 Test: blockdev copy ...passed 00:16:24.876 00:16:24.876 Run Summary: Type Total Ran Passed Failed Inactive 00:16:24.876 suites 1 1 n/a 0 0 00:16:24.876 tests 23 23 23 0 0 00:16:24.876 asserts 152 152 152 0 n/a 00:16:24.876 00:16:24.876 Elapsed time = 0.899 seconds 00:16:25.134 02:13:39 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.134 02:13:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.134 02:13:39 -- common/autotest_common.sh@10 -- # set +x 00:16:25.134 02:13:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.134 02:13:39 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:25.134 02:13:39 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:25.134 02:13:39 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:25.134 02:13:39 -- nvmf/common.sh@116 -- # sync 00:16:25.134 02:13:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:25.134 02:13:39 -- nvmf/common.sh@119 -- # set +e 00:16:25.134 02:13:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:25.134 02:13:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:25.134 rmmod nvme_tcp 00:16:25.134 rmmod nvme_fabrics 00:16:25.134 rmmod nvme_keyring 00:16:25.134 02:13:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:25.134 02:13:39 -- nvmf/common.sh@123 -- # set -e 00:16:25.134 02:13:39 -- nvmf/common.sh@124 -- # return 0 00:16:25.134 02:13:39 -- nvmf/common.sh@477 -- # '[' -n 75281 ']' 00:16:25.134 02:13:39 -- nvmf/common.sh@478 -- # killprocess 75281 00:16:25.134 02:13:39 -- common/autotest_common.sh@926 -- # '[' -z 75281 ']' 00:16:25.134 02:13:39 -- common/autotest_common.sh@930 -- # kill -0 75281 00:16:25.134 02:13:39 -- common/autotest_common.sh@931 -- # uname 00:16:25.134 02:13:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:25.134 02:13:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75281 00:16:25.134 02:13:39 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:25.134 02:13:39 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:25.134 killing process with pid 75281 00:16:25.134 02:13:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75281' 00:16:25.134 02:13:39 -- common/autotest_common.sh@945 -- # kill 75281 00:16:25.134 02:13:39 -- common/autotest_common.sh@950 -- # wait 75281 00:16:25.392 02:13:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:25.392 02:13:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:25.392 02:13:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:25.392 02:13:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.392 02:13:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:25.392 02:13:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.392 02:13:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.392 02:13:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.392 02:13:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:25.392 00:16:25.392 real 0m2.985s 00:16:25.392 user 0m10.582s 00:16:25.392 sys 0m0.696s 00:16:25.392 02:13:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.393 02:13:39 -- common/autotest_common.sh@10 -- # set +x 00:16:25.393 ************************************ 00:16:25.393 END TEST nvmf_bdevio 00:16:25.393 ************************************ 00:16:25.393 02:13:39 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:25.393 02:13:39 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:25.393 02:13:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:25.393 02:13:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:25.393 02:13:39 -- common/autotest_common.sh@10 -- # set +x 00:16:25.393 ************************************ 00:16:25.393 START TEST nvmf_bdevio_no_huge 00:16:25.393 ************************************ 00:16:25.393 02:13:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:25.650 * Looking for test storage... 
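The nvmf_bdevio teardown traced just above (nvmftestfini) is the mirror image of the setup: settle I/O, unload the initiator modules, stop the target, and strip the virtual network so the next test starts clean. Roughly, and with the same names as before (the namespace removal happens inside _remove_spdk_ns, whose output the log redirects away, so its exact commands are an assumption here):

sync                              # settle outstanding I/O before unloading modules
modprobe -v -r nvme-tcp           # rmmod nvme_tcp; nvme_fabrics and nvme_keyring come out as dependents
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"            # same helper as sketched earlier, here aimed at the nvmf_tgt reactor
_remove_spdk_ns                   # assumed to delete nvmf_tgt_ns_spdk and the veth ends parked inside it
ip -4 addr flush nvmf_init_if     # leave the initiator veth unaddressed for the following test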
00:16:25.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:25.650 02:13:40 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.650 02:13:40 -- nvmf/common.sh@7 -- # uname -s 00:16:25.650 02:13:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.650 02:13:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.651 02:13:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.651 02:13:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.651 02:13:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.651 02:13:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.651 02:13:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.651 02:13:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.651 02:13:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.651 02:13:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.651 02:13:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:16:25.651 02:13:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:16:25.651 02:13:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.651 02:13:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.651 02:13:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.651 02:13:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.651 02:13:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.651 02:13:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.651 02:13:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.651 02:13:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.651 02:13:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.651 02:13:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.651 02:13:40 -- 
paths/export.sh@5 -- # export PATH 00:16:25.651 02:13:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.651 02:13:40 -- nvmf/common.sh@46 -- # : 0 00:16:25.651 02:13:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:25.651 02:13:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:25.651 02:13:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:25.651 02:13:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.651 02:13:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.651 02:13:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:25.651 02:13:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:25.651 02:13:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:25.651 02:13:40 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.651 02:13:40 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.651 02:13:40 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:25.651 02:13:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:25.651 02:13:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.651 02:13:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:25.651 02:13:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:25.651 02:13:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:25.651 02:13:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.651 02:13:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.651 02:13:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.651 02:13:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:25.651 02:13:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:25.651 02:13:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:25.651 02:13:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:25.651 02:13:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:25.651 02:13:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:25.651 02:13:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.651 02:13:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.651 02:13:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.651 02:13:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:25.651 02:13:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.651 02:13:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.651 02:13:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.651 02:13:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.651 02:13:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.651 02:13:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.651 02:13:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.651 02:13:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.651 02:13:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:25.651 
02:13:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:25.651 Cannot find device "nvmf_tgt_br" 00:16:25.651 02:13:40 -- nvmf/common.sh@154 -- # true 00:16:25.651 02:13:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.651 Cannot find device "nvmf_tgt_br2" 00:16:25.651 02:13:40 -- nvmf/common.sh@155 -- # true 00:16:25.651 02:13:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:25.651 02:13:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:25.651 Cannot find device "nvmf_tgt_br" 00:16:25.651 02:13:40 -- nvmf/common.sh@157 -- # true 00:16:25.651 02:13:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:25.651 Cannot find device "nvmf_tgt_br2" 00:16:25.651 02:13:40 -- nvmf/common.sh@158 -- # true 00:16:25.651 02:13:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:25.651 02:13:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:25.651 02:13:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.651 02:13:40 -- nvmf/common.sh@161 -- # true 00:16:25.651 02:13:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.651 02:13:40 -- nvmf/common.sh@162 -- # true 00:16:25.651 02:13:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.651 02:13:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.651 02:13:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.651 02:13:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.651 02:13:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.651 02:13:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.651 02:13:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.651 02:13:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.910 02:13:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.910 02:13:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:25.910 02:13:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:25.910 02:13:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:25.910 02:13:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:25.910 02:13:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.910 02:13:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.910 02:13:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.910 02:13:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:25.910 02:13:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:25.910 02:13:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.910 02:13:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.910 02:13:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.910 02:13:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.910 02:13:40 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.910 02:13:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:25.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:25.910 00:16:25.910 --- 10.0.0.2 ping statistics --- 00:16:25.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.910 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:25.910 02:13:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:25.910 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.910 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:16:25.910 00:16:25.910 --- 10.0.0.3 ping statistics --- 00:16:25.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.910 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:25.910 02:13:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:25.910 00:16:25.910 --- 10.0.0.1 ping statistics --- 00:16:25.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.910 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:25.910 02:13:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.910 02:13:40 -- nvmf/common.sh@421 -- # return 0 00:16:25.910 02:13:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:25.910 02:13:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.910 02:13:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:25.910 02:13:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:25.910 02:13:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.910 02:13:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:25.910 02:13:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:25.910 02:13:40 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:25.910 02:13:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:25.910 02:13:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:25.910 02:13:40 -- common/autotest_common.sh@10 -- # set +x 00:16:25.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.910 02:13:40 -- nvmf/common.sh@469 -- # nvmfpid=75514 00:16:25.910 02:13:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:25.910 02:13:40 -- nvmf/common.sh@470 -- # waitforlisten 75514 00:16:25.910 02:13:40 -- common/autotest_common.sh@819 -- # '[' -z 75514 ']' 00:16:25.910 02:13:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.910 02:13:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:25.910 02:13:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.910 02:13:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:25.910 02:13:40 -- common/autotest_common.sh@10 -- # set +x 00:16:25.910 [2024-05-14 02:13:40.459684] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
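The no-huge variant repeats the identical topology; what changes is how the target application gets its memory. In the nvmfappstart line above, DPDK is told to skip hugepages entirely (--no-huge) and to cap itself at 1024 MB of ordinary anonymous memory (-s 1024), which is also why the EAL banner that follows reports --iova-mode=va instead of pa. The two launch lines, copied from the respective traces (-m 0x78 pins the reactors to cores 3-6, matching the "Reactor started on core" notices):

# nvmf_bdevio: hugepage-backed target
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78

# nvmf_bdevio_no_huge: no hugepages, 1024 MB of plain anonymous memory
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78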
00:16:25.910 [2024-05-14 02:13:40.459811] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:26.168 [2024-05-14 02:13:40.606004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.168 [2024-05-14 02:13:40.718818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:26.168 [2024-05-14 02:13:40.718963] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.168 [2024-05-14 02:13:40.718977] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.168 [2024-05-14 02:13:40.718986] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.168 [2024-05-14 02:13:40.719141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:26.168 [2024-05-14 02:13:40.719375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:26.168 [2024-05-14 02:13:40.719515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:26.168 [2024-05-14 02:13:40.719518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.101 02:13:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:27.101 02:13:41 -- common/autotest_common.sh@852 -- # return 0 00:16:27.101 02:13:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:27.101 02:13:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:27.101 02:13:41 -- common/autotest_common.sh@10 -- # set +x 00:16:27.101 02:13:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.101 02:13:41 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:27.101 02:13:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.101 02:13:41 -- common/autotest_common.sh@10 -- # set +x 00:16:27.101 [2024-05-14 02:13:41.459694] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.101 02:13:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.101 02:13:41 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:27.101 02:13:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.101 02:13:41 -- common/autotest_common.sh@10 -- # set +x 00:16:27.101 Malloc0 00:16:27.101 02:13:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.101 02:13:41 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:27.101 02:13:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.101 02:13:41 -- common/autotest_common.sh@10 -- # set +x 00:16:27.101 02:13:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.101 02:13:41 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.101 02:13:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.101 02:13:41 -- common/autotest_common.sh@10 -- # set +x 00:16:27.101 02:13:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.101 02:13:41 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.101 02:13:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.101 02:13:41 -- common/autotest_common.sh@10 -- # set +x 00:16:27.101 
[2024-05-14 02:13:41.501551] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.101 02:13:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.101 02:13:41 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:27.101 02:13:41 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:27.101 02:13:41 -- nvmf/common.sh@520 -- # config=() 00:16:27.101 02:13:41 -- nvmf/common.sh@520 -- # local subsystem config 00:16:27.101 02:13:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:27.101 02:13:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:27.101 { 00:16:27.101 "params": { 00:16:27.101 "name": "Nvme$subsystem", 00:16:27.101 "trtype": "$TEST_TRANSPORT", 00:16:27.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.101 "adrfam": "ipv4", 00:16:27.101 "trsvcid": "$NVMF_PORT", 00:16:27.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.101 "hdgst": ${hdgst:-false}, 00:16:27.101 "ddgst": ${ddgst:-false} 00:16:27.101 }, 00:16:27.101 "method": "bdev_nvme_attach_controller" 00:16:27.101 } 00:16:27.101 EOF 00:16:27.101 )") 00:16:27.101 02:13:41 -- nvmf/common.sh@542 -- # cat 00:16:27.101 02:13:41 -- nvmf/common.sh@544 -- # jq . 00:16:27.101 02:13:41 -- nvmf/common.sh@545 -- # IFS=, 00:16:27.101 02:13:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:27.101 "params": { 00:16:27.101 "name": "Nvme1", 00:16:27.101 "trtype": "tcp", 00:16:27.101 "traddr": "10.0.0.2", 00:16:27.101 "adrfam": "ipv4", 00:16:27.101 "trsvcid": "4420", 00:16:27.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.101 "hdgst": false, 00:16:27.101 "ddgst": false 00:16:27.101 }, 00:16:27.101 "method": "bdev_nvme_attach_controller" 00:16:27.101 }' 00:16:27.101 [2024-05-14 02:13:41.568308] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:27.101 [2024-05-14 02:13:41.568403] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75568 ] 00:16:27.359 [2024-05-14 02:13:41.718881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:27.359 [2024-05-14 02:13:41.860946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.359 [2024-05-14 02:13:41.861023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.359 [2024-05-14 02:13:41.861033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.618 [2024-05-14 02:13:42.073166] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
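On the initiator side the test never writes a config file: in the bdevio.sh@24 lines above, gen_nvmf_target_json prints the attach-controller configuration and bash process substitution hands it to bdevio as /dev/fd/62. Reconstructed from those two xtrace lines, the unexpanded command is effectively:

# bdevio.sh, as it reads before expansion (the xtrace shows it after expansion as --json /dev/fd/62):
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024
# gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem; for this run that is the
# Nvme1 / 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1 block printed verbatim a few lines above.

The bdevio process then attaches over TCP and runs the CUnit suite against Nvme1n1; the COMPARE FAILURE and ABORTED - FAILED FUSED completions that follow come from the suite's fused compare-and-write cases and do not indicate a failure, as the "comparev and writev" test still reports passed.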
00:16:27.618 [2024-05-14 02:13:42.073214] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:27.618 I/O targets: 00:16:27.618 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:27.618 00:16:27.618 00:16:27.618 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.618 http://cunit.sourceforge.net/ 00:16:27.618 00:16:27.618 00:16:27.618 Suite: bdevio tests on: Nvme1n1 00:16:27.618 Test: blockdev write read block ...passed 00:16:27.618 Test: blockdev write zeroes read block ...passed 00:16:27.618 Test: blockdev write zeroes read no split ...passed 00:16:27.618 Test: blockdev write zeroes read split ...passed 00:16:27.618 Test: blockdev write zeroes read split partial ...passed 00:16:27.618 Test: blockdev reset ...[2024-05-14 02:13:42.205937] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:27.618 [2024-05-14 02:13:42.206052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcbba0 (9): Bad file descriptor 00:16:27.877 [2024-05-14 02:13:42.220811] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:27.877 passed 00:16:27.877 Test: blockdev write read 8 blocks ...passed 00:16:27.877 Test: blockdev write read size > 128k ...passed 00:16:27.877 Test: blockdev write read invalid size ...passed 00:16:27.877 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:27.877 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:27.877 Test: blockdev write read max offset ...passed 00:16:27.877 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:27.877 Test: blockdev writev readv 8 blocks ...passed 00:16:27.877 Test: blockdev writev readv 30 x 1block ...passed 00:16:27.877 Test: blockdev writev readv block ...passed 00:16:27.877 Test: blockdev writev readv size > 128k ...passed 00:16:27.877 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:27.877 Test: blockdev comparev and writev ...[2024-05-14 02:13:42.395859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.877 [2024-05-14 02:13:42.396066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.877 [2024-05-14 02:13:42.396164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.877 [2024-05-14 02:13:42.396244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.877 [2024-05-14 02:13:42.396805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.877 [2024-05-14 02:13:42.396910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:27.877 [2024-05-14 02:13:42.396989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.877 [2024-05-14 02:13:42.397062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:27.877 [2024-05-14 02:13:42.397544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.877 [2024-05-14 02:13:42.397645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:27.877 [2024-05-14 02:13:42.397783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.877 [2024-05-14 02:13:42.397878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:27.877 [2024-05-14 02:13:42.398358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.877 [2024-05-14 02:13:42.398445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:27.877 [2024-05-14 02:13:42.398526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.877 [2024-05-14 02:13:42.398596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:27.877 passed 00:16:28.135 Test: blockdev nvme passthru rw ...passed 00:16:28.136 Test: blockdev nvme passthru vendor specific ...[2024-05-14 02:13:42.482057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.136 [2024-05-14 02:13:42.482220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:28.136 [2024-05-14 02:13:42.483023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.136 [2024-05-14 02:13:42.483134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:28.136 [2024-05-14 02:13:42.483885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.136 [2024-05-14 02:13:42.483994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:28.136 [2024-05-14 02:13:42.484687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.136 [2024-05-14 02:13:42.484817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:28.136 passed 00:16:28.136 Test: blockdev nvme admin passthru ...passed 00:16:28.136 Test: blockdev copy ...passed 00:16:28.136 00:16:28.136 Run Summary: Type Total Ran Passed Failed Inactive 00:16:28.136 suites 1 1 n/a 0 0 00:16:28.136 tests 23 23 23 0 0 00:16:28.136 asserts 152 152 152 0 n/a 00:16:28.136 00:16:28.136 Elapsed time = 0.933 seconds 00:16:28.394 02:13:42 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.394 02:13:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.394 02:13:42 -- common/autotest_common.sh@10 -- # set +x 00:16:28.394 02:13:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.394 02:13:42 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:28.394 02:13:42 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:28.394 02:13:42 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:28.394 02:13:42 -- nvmf/common.sh@116 -- # sync 00:16:28.652 02:13:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:28.652 02:13:43 -- nvmf/common.sh@119 -- # set +e 00:16:28.652 02:13:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:28.652 02:13:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:28.652 rmmod nvme_tcp 00:16:28.652 rmmod nvme_fabrics 00:16:28.652 rmmod nvme_keyring 00:16:28.652 02:13:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:28.652 02:13:43 -- nvmf/common.sh@123 -- # set -e 00:16:28.652 02:13:43 -- nvmf/common.sh@124 -- # return 0 00:16:28.652 02:13:43 -- nvmf/common.sh@477 -- # '[' -n 75514 ']' 00:16:28.652 02:13:43 -- nvmf/common.sh@478 -- # killprocess 75514 00:16:28.652 02:13:43 -- common/autotest_common.sh@926 -- # '[' -z 75514 ']' 00:16:28.652 02:13:43 -- common/autotest_common.sh@930 -- # kill -0 75514 00:16:28.652 02:13:43 -- common/autotest_common.sh@931 -- # uname 00:16:28.652 02:13:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.652 02:13:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75514 00:16:28.652 02:13:43 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:28.652 02:13:43 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:28.652 killing process with pid 75514 00:16:28.652 02:13:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75514' 00:16:28.652 02:13:43 -- common/autotest_common.sh@945 -- # kill 75514 00:16:28.652 02:13:43 -- common/autotest_common.sh@950 -- # wait 75514 00:16:28.911 02:13:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:28.911 02:13:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:28.911 02:13:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:28.911 02:13:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.911 02:13:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:28.911 02:13:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.911 02:13:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.911 02:13:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.170 02:13:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:29.170 00:16:29.170 real 0m3.587s 00:16:29.170 user 0m13.050s 00:16:29.170 sys 0m1.338s 00:16:29.170 02:13:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.170 02:13:43 -- common/autotest_common.sh@10 -- # set +x 00:16:29.170 ************************************ 00:16:29.170 END TEST nvmf_bdevio_no_huge 00:16:29.170 ************************************ 00:16:29.170 02:13:43 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:29.170 02:13:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:29.170 02:13:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:29.170 02:13:43 -- common/autotest_common.sh@10 -- # set +x 00:16:29.170 ************************************ 00:16:29.170 START TEST nvmf_tls 00:16:29.170 ************************************ 00:16:29.170 02:13:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:29.170 * Looking for test storage... 
00:16:29.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:29.170 02:13:43 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:29.170 02:13:43 -- nvmf/common.sh@7 -- # uname -s 00:16:29.170 02:13:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.170 02:13:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.170 02:13:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.170 02:13:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.170 02:13:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.170 02:13:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.170 02:13:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.170 02:13:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.170 02:13:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.170 02:13:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.170 02:13:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:16:29.170 02:13:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:16:29.170 02:13:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.170 02:13:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.170 02:13:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:29.170 02:13:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:29.170 02:13:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.170 02:13:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.170 02:13:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.170 02:13:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.170 02:13:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.170 02:13:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.170 02:13:43 -- paths/export.sh@5 
-- # export PATH 00:16:29.170 02:13:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.170 02:13:43 -- nvmf/common.sh@46 -- # : 0 00:16:29.170 02:13:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:29.170 02:13:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:29.170 02:13:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:29.170 02:13:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.170 02:13:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.170 02:13:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:29.171 02:13:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:29.171 02:13:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:29.171 02:13:43 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:29.171 02:13:43 -- target/tls.sh@71 -- # nvmftestinit 00:16:29.171 02:13:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:29.171 02:13:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.171 02:13:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:29.171 02:13:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:29.171 02:13:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:29.171 02:13:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.171 02:13:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.171 02:13:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.171 02:13:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:29.171 02:13:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:29.171 02:13:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:29.171 02:13:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:29.171 02:13:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:29.171 02:13:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:29.171 02:13:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.171 02:13:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.171 02:13:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:29.171 02:13:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:29.171 02:13:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:29.171 02:13:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:29.171 02:13:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:29.171 02:13:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.171 02:13:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:29.171 02:13:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:29.171 02:13:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:29.171 02:13:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:29.171 02:13:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:29.171 02:13:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:16:29.171 Cannot find device "nvmf_tgt_br" 00:16:29.171 02:13:43 -- nvmf/common.sh@154 -- # true 00:16:29.171 02:13:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:29.171 Cannot find device "nvmf_tgt_br2" 00:16:29.171 02:13:43 -- nvmf/common.sh@155 -- # true 00:16:29.171 02:13:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:29.171 02:13:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:29.171 Cannot find device "nvmf_tgt_br" 00:16:29.171 02:13:43 -- nvmf/common.sh@157 -- # true 00:16:29.171 02:13:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:29.171 Cannot find device "nvmf_tgt_br2" 00:16:29.171 02:13:43 -- nvmf/common.sh@158 -- # true 00:16:29.171 02:13:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:29.430 02:13:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:29.430 02:13:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:29.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.430 02:13:43 -- nvmf/common.sh@161 -- # true 00:16:29.430 02:13:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:29.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.430 02:13:43 -- nvmf/common.sh@162 -- # true 00:16:29.430 02:13:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:29.430 02:13:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:29.430 02:13:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:29.430 02:13:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:29.430 02:13:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:29.430 02:13:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:29.430 02:13:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:29.430 02:13:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:29.430 02:13:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:29.430 02:13:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:29.430 02:13:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:29.430 02:13:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:29.430 02:13:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:29.430 02:13:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:29.430 02:13:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:29.430 02:13:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:29.430 02:13:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:29.430 02:13:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:29.430 02:13:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:29.430 02:13:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:29.430 02:13:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:29.430 02:13:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:29.430 02:13:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:16:29.430 02:13:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:29.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:16:29.430 00:16:29.430 --- 10.0.0.2 ping statistics --- 00:16:29.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.430 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:29.430 02:13:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:29.430 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:29.430 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:16:29.430 00:16:29.430 --- 10.0.0.3 ping statistics --- 00:16:29.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.431 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:29.431 02:13:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:29.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:29.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:29.431 00:16:29.431 --- 10.0.0.1 ping statistics --- 00:16:29.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.431 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:29.431 02:13:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.431 02:13:44 -- nvmf/common.sh@421 -- # return 0 00:16:29.431 02:13:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:29.431 02:13:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.431 02:13:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:29.431 02:13:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:29.431 02:13:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.431 02:13:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:29.431 02:13:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:29.689 02:13:44 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:29.689 02:13:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:29.689 02:13:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:29.689 02:13:44 -- common/autotest_common.sh@10 -- # set +x 00:16:29.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.689 02:13:44 -- nvmf/common.sh@469 -- # nvmfpid=75754 00:16:29.689 02:13:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:29.689 02:13:44 -- nvmf/common.sh@470 -- # waitforlisten 75754 00:16:29.689 02:13:44 -- common/autotest_common.sh@819 -- # '[' -z 75754 ']' 00:16:29.689 02:13:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.689 02:13:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:29.689 02:13:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.689 02:13:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:29.689 02:13:44 -- common/autotest_common.sh@10 -- # set +x 00:16:29.689 [2024-05-14 02:13:44.085130] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:16:29.689 [2024-05-14 02:13:44.085244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.689 [2024-05-14 02:13:44.219855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.689 [2024-05-14 02:13:44.276254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:29.689 [2024-05-14 02:13:44.276407] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.689 [2024-05-14 02:13:44.276420] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.689 [2024-05-14 02:13:44.276429] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.689 [2024-05-14 02:13:44.276459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.948 02:13:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:29.948 02:13:44 -- common/autotest_common.sh@852 -- # return 0 00:16:29.948 02:13:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:29.948 02:13:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:29.948 02:13:44 -- common/autotest_common.sh@10 -- # set +x 00:16:29.948 02:13:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.948 02:13:44 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:29.948 02:13:44 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:30.207 true 00:16:30.207 02:13:44 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:30.207 02:13:44 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:30.488 02:13:44 -- target/tls.sh@82 -- # version=0 00:16:30.488 02:13:44 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:30.488 02:13:44 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:30.749 02:13:45 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:30.749 02:13:45 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:31.007 02:13:45 -- target/tls.sh@90 -- # version=13 00:16:31.007 02:13:45 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:31.007 02:13:45 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:31.266 02:13:45 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:31.266 02:13:45 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:31.524 02:13:45 -- target/tls.sh@98 -- # version=7 00:16:31.524 02:13:45 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:31.524 02:13:45 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:31.524 02:13:45 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:31.782 02:13:46 -- target/tls.sh@105 -- # ktls=false 00:16:31.782 02:13:46 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:31.782 02:13:46 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:32.041 02:13:46 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:32.041 02:13:46 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:16:32.299 02:13:46 -- target/tls.sh@113 -- # ktls=true 00:16:32.299 02:13:46 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:32.299 02:13:46 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:32.558 02:13:46 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:32.558 02:13:46 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:32.816 02:13:47 -- target/tls.sh@121 -- # ktls=false 00:16:32.816 02:13:47 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:32.816 02:13:47 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:32.816 02:13:47 -- target/tls.sh@49 -- # local key hash crc 00:16:32.816 02:13:47 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:32.816 02:13:47 -- target/tls.sh@51 -- # hash=01 00:16:32.816 02:13:47 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:32.816 02:13:47 -- target/tls.sh@52 -- # gzip -1 -c 00:16:32.816 02:13:47 -- target/tls.sh@52 -- # tail -c8 00:16:32.816 02:13:47 -- target/tls.sh@52 -- # head -c 4 00:16:32.816 02:13:47 -- target/tls.sh@52 -- # crc='p$H�' 00:16:32.816 02:13:47 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:32.816 02:13:47 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:32.816 02:13:47 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:32.816 02:13:47 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:32.816 02:13:47 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:32.816 02:13:47 -- target/tls.sh@49 -- # local key hash crc 00:16:32.816 02:13:47 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:32.816 02:13:47 -- target/tls.sh@51 -- # hash=01 00:16:32.816 02:13:47 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:32.816 02:13:47 -- target/tls.sh@52 -- # gzip -1 -c 00:16:32.816 02:13:47 -- target/tls.sh@52 -- # tail -c8 00:16:32.816 02:13:47 -- target/tls.sh@52 -- # head -c 4 00:16:32.816 02:13:47 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:32.816 02:13:47 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:32.816 02:13:47 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:32.816 02:13:47 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:32.816 02:13:47 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:32.816 02:13:47 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:32.816 02:13:47 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:32.816 02:13:47 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:32.816 02:13:47 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:32.816 02:13:47 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:32.816 02:13:47 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:32.817 02:13:47 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:33.074 02:13:47 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:33.332 02:13:47 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:33.332 02:13:47 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:33.332 02:13:47 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:33.590 [2024-05-14 02:13:48.019455] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.590 02:13:48 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:33.849 02:13:48 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:34.107 [2024-05-14 02:13:48.519577] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:34.107 [2024-05-14 02:13:48.519792] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.107 02:13:48 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:34.365 malloc0 00:16:34.365 02:13:48 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:34.623 02:13:48 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:34.882 02:13:49 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:44.850 Initializing NVMe Controllers 00:16:44.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:44.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:44.850 Initialization complete. Launching workers. 
00:16:44.850 ======================================================== 00:16:44.850 Latency(us) 00:16:44.850 Device Information : IOPS MiB/s Average min max 00:16:44.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9874.14 38.57 6482.78 2645.84 10911.30 00:16:44.850 ======================================================== 00:16:44.850 Total : 9874.14 38.57 6482.78 2645.84 10911.30 00:16:44.850 00:16:44.850 02:13:59 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:44.850 02:13:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:44.850 02:13:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:44.850 02:13:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:44.850 02:13:59 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:44.850 02:13:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:44.850 02:13:59 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:44.850 02:13:59 -- target/tls.sh@28 -- # bdevperf_pid=76105 00:16:44.850 02:13:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:44.850 02:13:59 -- target/tls.sh@31 -- # waitforlisten 76105 /var/tmp/bdevperf.sock 00:16:44.850 02:13:59 -- common/autotest_common.sh@819 -- # '[' -z 76105 ']' 00:16:44.850 02:13:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:44.850 02:13:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:44.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:44.850 02:13:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:44.850 02:13:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:44.850 02:13:59 -- common/autotest_common.sh@10 -- # set +x 00:16:45.109 [2024-05-14 02:13:59.469799] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:45.109 [2024-05-14 02:13:59.469880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76105 ] 00:16:45.109 [2024-05-14 02:13:59.604375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.109 [2024-05-14 02:13:59.670233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.044 02:14:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:46.044 02:14:00 -- common/autotest_common.sh@852 -- # return 0 00:16:46.044 02:14:00 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.303 [2024-05-14 02:14:00.693662] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:46.303 TLSTESTn1 00:16:46.303 02:14:00 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:46.561 Running I/O for 10 seconds... 
00:16:56.564 00:16:56.564 Latency(us) 00:16:56.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.564 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:56.564 Verification LBA range: start 0x0 length 0x2000 00:16:56.564 TLSTESTn1 : 10.02 5360.48 20.94 0.00 0.00 23840.06 6494.02 25976.09 00:16:56.564 =================================================================================================================== 00:16:56.564 Total : 5360.48 20.94 0.00 0.00 23840.06 6494.02 25976.09 00:16:56.564 0 00:16:56.564 02:14:10 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.564 02:14:10 -- target/tls.sh@45 -- # killprocess 76105 00:16:56.564 02:14:10 -- common/autotest_common.sh@926 -- # '[' -z 76105 ']' 00:16:56.564 02:14:10 -- common/autotest_common.sh@930 -- # kill -0 76105 00:16:56.564 02:14:10 -- common/autotest_common.sh@931 -- # uname 00:16:56.564 02:14:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.564 02:14:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76105 00:16:56.564 02:14:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:56.564 02:14:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:56.564 killing process with pid 76105 00:16:56.565 02:14:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76105' 00:16:56.565 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.565 00:16:56.565 Latency(us) 00:16:56.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.565 =================================================================================================================== 00:16:56.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.565 02:14:10 -- common/autotest_common.sh@945 -- # kill 76105 00:16:56.565 02:14:10 -- common/autotest_common.sh@950 -- # wait 76105 00:16:56.565 02:14:11 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:56.565 02:14:11 -- common/autotest_common.sh@640 -- # local es=0 00:16:56.565 02:14:11 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:56.565 02:14:11 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:56.565 02:14:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:56.565 02:14:11 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:56.565 02:14:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:56.565 02:14:11 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:56.565 02:14:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:56.565 02:14:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:56.565 02:14:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:56.565 02:14:11 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:56.565 02:14:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:56.565 02:14:11 -- target/tls.sh@28 -- # bdevperf_pid=76251 00:16:56.565 02:14:11 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock 
-q 128 -o 4096 -w verify -t 10 00:16:56.565 02:14:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:56.565 02:14:11 -- target/tls.sh@31 -- # waitforlisten 76251 /var/tmp/bdevperf.sock 00:16:56.565 02:14:11 -- common/autotest_common.sh@819 -- # '[' -z 76251 ']' 00:16:56.565 02:14:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.565 02:14:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.565 02:14:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.565 02:14:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.565 02:14:11 -- common/autotest_common.sh@10 -- # set +x 00:16:56.825 [2024-05-14 02:14:11.203615] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:56.825 [2024-05-14 02:14:11.203716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76251 ] 00:16:56.825 [2024-05-14 02:14:11.339027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.825 [2024-05-14 02:14:11.395139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.760 02:14:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.760 02:14:12 -- common/autotest_common.sh@852 -- # return 0 00:16:57.760 02:14:12 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:58.057 [2024-05-14 02:14:12.405379] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:58.057 [2024-05-14 02:14:12.410499] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:58.057 [2024-05-14 02:14:12.411100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ed570 (107): Transport endpoint is not connected 00:16:58.057 [2024-05-14 02:14:12.412089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ed570 (9): Bad file descriptor 00:16:58.057 [2024-05-14 02:14:12.413085] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:58.057 [2024-05-14 02:14:12.413108] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:58.057 [2024-05-14 02:14:12.413121] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:58.057 2024/05/14 02:14:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:58.057 request: 00:16:58.057 { 00:16:58.057 "method": "bdev_nvme_attach_controller", 00:16:58.057 "params": { 00:16:58.057 "name": "TLSTEST", 00:16:58.057 "trtype": "tcp", 00:16:58.057 "traddr": "10.0.0.2", 00:16:58.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.057 "adrfam": "ipv4", 00:16:58.057 "trsvcid": "4420", 00:16:58.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.057 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:16:58.057 } 00:16:58.057 } 00:16:58.057 Got JSON-RPC error response 00:16:58.057 GoRPCClient: error on JSON-RPC call 00:16:58.057 02:14:12 -- target/tls.sh@36 -- # killprocess 76251 00:16:58.057 02:14:12 -- common/autotest_common.sh@926 -- # '[' -z 76251 ']' 00:16:58.057 02:14:12 -- common/autotest_common.sh@930 -- # kill -0 76251 00:16:58.057 02:14:12 -- common/autotest_common.sh@931 -- # uname 00:16:58.057 02:14:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:58.057 02:14:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76251 00:16:58.057 02:14:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:58.057 02:14:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:58.057 killing process with pid 76251 00:16:58.057 02:14:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76251' 00:16:58.057 02:14:12 -- common/autotest_common.sh@945 -- # kill 76251 00:16:58.057 Received shutdown signal, test time was about 10.000000 seconds 00:16:58.057 00:16:58.057 Latency(us) 00:16:58.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.057 =================================================================================================================== 00:16:58.057 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:58.057 02:14:12 -- common/autotest_common.sh@950 -- # wait 76251 00:16:58.316 02:14:12 -- target/tls.sh@37 -- # return 1 00:16:58.316 02:14:12 -- common/autotest_common.sh@643 -- # es=1 00:16:58.316 02:14:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:58.316 02:14:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:58.316 02:14:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:58.316 02:14:12 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:58.316 02:14:12 -- common/autotest_common.sh@640 -- # local es=0 00:16:58.316 02:14:12 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:58.316 02:14:12 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:58.316 02:14:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:58.316 02:14:12 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:58.316 02:14:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:58.316 02:14:12 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:58.316 02:14:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:58.316 02:14:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:58.316 02:14:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:58.316 02:14:12 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:58.316 02:14:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:58.316 02:14:12 -- target/tls.sh@28 -- # bdevperf_pid=76297 00:16:58.316 02:14:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:58.316 02:14:12 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:58.316 02:14:12 -- target/tls.sh@31 -- # waitforlisten 76297 /var/tmp/bdevperf.sock 00:16:58.316 02:14:12 -- common/autotest_common.sh@819 -- # '[' -z 76297 ']' 00:16:58.316 02:14:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.316 02:14:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:58.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.316 02:14:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.316 02:14:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:58.316 02:14:12 -- common/autotest_common.sh@10 -- # set +x 00:16:58.316 [2024-05-14 02:14:12.706304] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:58.316 [2024-05-14 02:14:12.706438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76297 ] 00:16:58.316 [2024-05-14 02:14:12.845205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.574 [2024-05-14 02:14:12.913552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.141 02:14:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:59.141 02:14:13 -- common/autotest_common.sh@852 -- # return 0 00:16:59.141 02:14:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:59.399 [2024-05-14 02:14:13.956754] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.399 [2024-05-14 02:14:13.961728] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:59.399 [2024-05-14 02:14:13.961784] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:59.399 [2024-05-14 02:14:13.961840] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:59.399 [2024-05-14 02:14:13.962440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103f570 (107): Transport endpoint is not connected 
00:16:59.399 [2024-05-14 02:14:13.963426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103f570 (9): Bad file descriptor 00:16:59.399 [2024-05-14 02:14:13.964424] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:59.399 [2024-05-14 02:14:13.964445] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:59.399 [2024-05-14 02:14:13.964458] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:59.399 2024/05/14 02:14:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:59.399 request: 00:16:59.399 { 00:16:59.399 "method": "bdev_nvme_attach_controller", 00:16:59.399 "params": { 00:16:59.399 "name": "TLSTEST", 00:16:59.399 "trtype": "tcp", 00:16:59.399 "traddr": "10.0.0.2", 00:16:59.399 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:59.399 "adrfam": "ipv4", 00:16:59.399 "trsvcid": "4420", 00:16:59.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.399 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:59.399 } 00:16:59.399 } 00:16:59.399 Got JSON-RPC error response 00:16:59.399 GoRPCClient: error on JSON-RPC call 00:16:59.399 02:14:13 -- target/tls.sh@36 -- # killprocess 76297 00:16:59.399 02:14:13 -- common/autotest_common.sh@926 -- # '[' -z 76297 ']' 00:16:59.399 02:14:13 -- common/autotest_common.sh@930 -- # kill -0 76297 00:16:59.399 02:14:13 -- common/autotest_common.sh@931 -- # uname 00:16:59.658 02:14:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:59.658 02:14:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76297 00:16:59.658 02:14:14 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:59.658 02:14:14 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:59.658 02:14:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76297' 00:16:59.658 killing process with pid 76297 00:16:59.658 02:14:14 -- common/autotest_common.sh@945 -- # kill 76297 00:16:59.658 Received shutdown signal, test time was about 10.000000 seconds 00:16:59.658 00:16:59.658 Latency(us) 00:16:59.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.658 =================================================================================================================== 00:16:59.658 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:59.658 02:14:14 -- common/autotest_common.sh@950 -- # wait 76297 00:16:59.658 02:14:14 -- target/tls.sh@37 -- # return 1 00:16:59.658 02:14:14 -- common/autotest_common.sh@643 -- # es=1 00:16:59.658 02:14:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:59.658 02:14:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:59.658 02:14:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:59.658 02:14:14 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:59.658 02:14:14 -- common/autotest_common.sh@640 -- # local es=0 00:16:59.658 02:14:14 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:59.658 02:14:14 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:59.658 02:14:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:59.658 02:14:14 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:59.658 02:14:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:59.658 02:14:14 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:59.658 02:14:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:59.658 02:14:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:59.658 02:14:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:59.658 02:14:14 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:59.658 02:14:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.658 02:14:14 -- target/tls.sh@28 -- # bdevperf_pid=76341 00:16:59.658 02:14:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:59.658 02:14:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.658 02:14:14 -- target/tls.sh@31 -- # waitforlisten 76341 /var/tmp/bdevperf.sock 00:16:59.658 02:14:14 -- common/autotest_common.sh@819 -- # '[' -z 76341 ']' 00:16:59.658 02:14:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.658 02:14:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:59.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.658 02:14:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.659 02:14:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:59.659 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:16:59.917 [2024-05-14 02:14:14.261922] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:16:59.917 [2024-05-14 02:14:14.262054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76341 ] 00:16:59.917 [2024-05-14 02:14:14.406874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.917 [2024-05-14 02:14:14.491980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.854 02:14:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:00.854 02:14:15 -- common/autotest_common.sh@852 -- # return 0 00:17:00.854 02:14:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:01.113 [2024-05-14 02:14:15.515727] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.113 [2024-05-14 02:14:15.520782] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:01.113 [2024-05-14 02:14:15.520822] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:01.113 [2024-05-14 02:14:15.520875] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:01.113 [2024-05-14 02:14:15.521491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad6570 (107): Transport endpoint is not connected 00:17:01.113 [2024-05-14 02:14:15.522478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad6570 (9): Bad file descriptor 00:17:01.113 [2024-05-14 02:14:15.523474] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:01.114 [2024-05-14 02:14:15.523493] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:01.114 [2024-05-14 02:14:15.523522] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:01.114 2024/05/14 02:14:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:01.114 request: 00:17:01.114 { 00:17:01.114 "method": "bdev_nvme_attach_controller", 00:17:01.114 "params": { 00:17:01.114 "name": "TLSTEST", 00:17:01.114 "trtype": "tcp", 00:17:01.114 "traddr": "10.0.0.2", 00:17:01.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:01.114 "adrfam": "ipv4", 00:17:01.114 "trsvcid": "4420", 00:17:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:01.114 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:01.114 } 00:17:01.114 } 00:17:01.114 Got JSON-RPC error response 00:17:01.114 GoRPCClient: error on JSON-RPC call 00:17:01.114 02:14:15 -- target/tls.sh@36 -- # killprocess 76341 00:17:01.114 02:14:15 -- common/autotest_common.sh@926 -- # '[' -z 76341 ']' 00:17:01.114 02:14:15 -- common/autotest_common.sh@930 -- # kill -0 76341 00:17:01.114 02:14:15 -- common/autotest_common.sh@931 -- # uname 00:17:01.114 02:14:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:01.114 02:14:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76341 00:17:01.114 02:14:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:01.114 02:14:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:01.114 killing process with pid 76341 00:17:01.114 02:14:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76341' 00:17:01.114 Received shutdown signal, test time was about 10.000000 seconds 00:17:01.114 00:17:01.114 Latency(us) 00:17:01.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.114 =================================================================================================================== 00:17:01.114 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:01.114 02:14:15 -- common/autotest_common.sh@945 -- # kill 76341 00:17:01.114 02:14:15 -- common/autotest_common.sh@950 -- # wait 76341 00:17:01.373 02:14:15 -- target/tls.sh@37 -- # return 1 00:17:01.373 02:14:15 -- common/autotest_common.sh@643 -- # es=1 00:17:01.373 02:14:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:01.373 02:14:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:01.373 02:14:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:01.373 02:14:15 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:01.373 02:14:15 -- common/autotest_common.sh@640 -- # local es=0 00:17:01.373 02:14:15 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:01.373 02:14:15 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:01.373 02:14:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:01.373 02:14:15 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:01.373 02:14:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:01.373 02:14:15 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:01.373 02:14:15 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:01.373 02:14:15 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:17:01.373 02:14:15 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:01.373 02:14:15 -- target/tls.sh@23 -- # psk= 00:17:01.373 02:14:15 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.373 02:14:15 -- target/tls.sh@28 -- # bdevperf_pid=76388 00:17:01.373 02:14:15 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:01.373 02:14:15 -- target/tls.sh@31 -- # waitforlisten 76388 /var/tmp/bdevperf.sock 00:17:01.373 02:14:15 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:01.373 02:14:15 -- common/autotest_common.sh@819 -- # '[' -z 76388 ']' 00:17:01.373 02:14:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.373 02:14:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.373 02:14:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.373 02:14:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.373 02:14:15 -- common/autotest_common.sh@10 -- # set +x 00:17:01.373 [2024-05-14 02:14:15.810936] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:01.373 [2024-05-14 02:14:15.811045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76388 ] 00:17:01.373 [2024-05-14 02:14:15.947457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.632 [2024-05-14 02:14:16.017161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.567 02:14:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:02.567 02:14:16 -- common/autotest_common.sh@852 -- # return 0 00:17:02.567 02:14:16 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:02.567 [2024-05-14 02:14:17.110680] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:02.567 [2024-05-14 02:14:17.112542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200f170 (9): Bad file descriptor 00:17:02.567 [2024-05-14 02:14:17.113536] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:02.567 [2024-05-14 02:14:17.113568] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:02.567 [2024-05-14 02:14:17.113588] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:02.567 2024/05/14 02:14:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:02.567 request: 00:17:02.567 { 00:17:02.567 "method": "bdev_nvme_attach_controller", 00:17:02.567 "params": { 00:17:02.567 "name": "TLSTEST", 00:17:02.567 "trtype": "tcp", 00:17:02.567 "traddr": "10.0.0.2", 00:17:02.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.567 "adrfam": "ipv4", 00:17:02.567 "trsvcid": "4420", 00:17:02.567 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:02.567 } 00:17:02.567 } 00:17:02.567 Got JSON-RPC error response 00:17:02.567 GoRPCClient: error on JSON-RPC call 00:17:02.567 02:14:17 -- target/tls.sh@36 -- # killprocess 76388 00:17:02.567 02:14:17 -- common/autotest_common.sh@926 -- # '[' -z 76388 ']' 00:17:02.567 02:14:17 -- common/autotest_common.sh@930 -- # kill -0 76388 00:17:02.567 02:14:17 -- common/autotest_common.sh@931 -- # uname 00:17:02.567 02:14:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:02.567 02:14:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76388 00:17:02.827 killing process with pid 76388 00:17:02.827 Received shutdown signal, test time was about 10.000000 seconds 00:17:02.827 00:17:02.827 Latency(us) 00:17:02.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.827 =================================================================================================================== 00:17:02.827 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.827 02:14:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:02.827 02:14:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:02.827 02:14:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76388' 00:17:02.827 02:14:17 -- common/autotest_common.sh@945 -- # kill 76388 00:17:02.827 02:14:17 -- common/autotest_common.sh@950 -- # wait 76388 00:17:02.827 02:14:17 -- target/tls.sh@37 -- # return 1 00:17:02.827 02:14:17 -- common/autotest_common.sh@643 -- # es=1 00:17:02.827 02:14:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:02.827 02:14:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:02.827 02:14:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:02.827 02:14:17 -- target/tls.sh@167 -- # killprocess 75754 00:17:02.827 02:14:17 -- common/autotest_common.sh@926 -- # '[' -z 75754 ']' 00:17:02.827 02:14:17 -- common/autotest_common.sh@930 -- # kill -0 75754 00:17:02.827 02:14:17 -- common/autotest_common.sh@931 -- # uname 00:17:02.827 02:14:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:02.827 02:14:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75754 00:17:02.827 killing process with pid 75754 00:17:02.827 02:14:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:02.827 02:14:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:02.827 02:14:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75754' 00:17:02.827 02:14:17 -- common/autotest_common.sh@945 -- # kill 75754 00:17:02.827 02:14:17 -- common/autotest_common.sh@950 -- # wait 75754 00:17:03.086 02:14:17 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:03.086 02:14:17 -- 
target/tls.sh@49 -- # local key hash crc 00:17:03.086 02:14:17 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:03.086 02:14:17 -- target/tls.sh@51 -- # hash=02 00:17:03.086 02:14:17 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:03.086 02:14:17 -- target/tls.sh@52 -- # gzip -1 -c 00:17:03.086 02:14:17 -- target/tls.sh@52 -- # tail -c8 00:17:03.086 02:14:17 -- target/tls.sh@52 -- # head -c 4 00:17:03.086 02:14:17 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:03.086 02:14:17 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:03.086 02:14:17 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:03.086 02:14:17 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:03.086 02:14:17 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:03.086 02:14:17 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:03.086 02:14:17 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:03.086 02:14:17 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:03.086 02:14:17 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:03.086 02:14:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:03.086 02:14:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:03.086 02:14:17 -- common/autotest_common.sh@10 -- # set +x 00:17:03.086 02:14:17 -- nvmf/common.sh@469 -- # nvmfpid=76449 00:17:03.086 02:14:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:03.086 02:14:17 -- nvmf/common.sh@470 -- # waitforlisten 76449 00:17:03.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.086 02:14:17 -- common/autotest_common.sh@819 -- # '[' -z 76449 ']' 00:17:03.086 02:14:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.086 02:14:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:03.086 02:14:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.086 02:14:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:03.086 02:14:17 -- common/autotest_common.sh@10 -- # set +x 00:17:03.086 [2024-05-14 02:14:17.648892] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:03.086 [2024-05-14 02:14:17.648968] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.345 [2024-05-14 02:14:17.779970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.345 [2024-05-14 02:14:17.835210] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:03.345 [2024-05-14 02:14:17.835359] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.345 [2024-05-14 02:14:17.835371] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
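The format_interchange_psk trace above assembles the TLS PSK interchange string by hand: the hex key is piped through gzip -1 and the first 4 bytes of gzip's 8-byte trailer (the CRC-32 of the input) are appended to the key, then the concatenation is base64-encoded under an NVMeTLSkey-1:<hash>: prefix, hash 02 being the SHA-384 variant of the interchange format. A standalone sketch of the same derivation with the test's sample key (capturing the raw CRC in a shell variable only works here because this particular CRC happens to contain no NUL bytes):

    key=00112233445566778899aabbccddeeff0011223344556677
    hash=02
    # gzip trailer = CRC-32 (4 bytes, little-endian) + input length; keep only the CRC
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    key_long="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    echo "$key_long"   # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    echo -n "$key_long" > "$key_path"
    chmod 0600 "$key_path"   # the PSK loaders reject group/other-readable key files
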
00:17:03.345 [2024-05-14 02:14:17.835380] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.345 [2024-05-14 02:14:17.835409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.279 02:14:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.279 02:14:18 -- common/autotest_common.sh@852 -- # return 0 00:17:04.279 02:14:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:04.279 02:14:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:04.279 02:14:18 -- common/autotest_common.sh@10 -- # set +x 00:17:04.279 02:14:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.279 02:14:18 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:04.279 02:14:18 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:04.279 02:14:18 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:04.536 [2024-05-14 02:14:18.970941] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.536 02:14:18 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:04.794 02:14:19 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:05.051 [2024-05-14 02:14:19.443049] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:05.051 [2024-05-14 02:14:19.443252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.051 02:14:19 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:05.309 malloc0 00:17:05.309 02:14:19 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:05.567 02:14:19 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.834 02:14:20 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.834 02:14:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:05.834 02:14:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:05.834 02:14:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:05.834 02:14:20 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:05.834 02:14:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.834 02:14:20 -- target/tls.sh@28 -- # bdevperf_pid=76554 00:17:05.834 02:14:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.834 02:14:20 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:05.834 02:14:20 -- target/tls.sh@31 -- # waitforlisten 76554 /var/tmp/bdevperf.sock 00:17:05.834 02:14:20 -- common/autotest_common.sh@819 -- # '[' -z 76554 ']' 00:17:05.834 02:14:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.834 
02:14:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:05.834 02:14:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.834 02:14:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:05.834 02:14:20 -- common/autotest_common.sh@10 -- # set +x 00:17:05.834 [2024-05-14 02:14:20.308282] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:05.834 [2024-05-14 02:14:20.308374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76554 ] 00:17:06.108 [2024-05-14 02:14:20.445936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.108 [2024-05-14 02:14:20.526144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.042 02:14:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:07.042 02:14:21 -- common/autotest_common.sh@852 -- # return 0 00:17:07.042 02:14:21 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:07.042 [2024-05-14 02:14:21.586040] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:07.301 TLSTESTn1 00:17:07.301 02:14:21 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:07.301 Running I/O for 10 seconds... 
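With the 0600 key in place the positive path runs: bdevperf (pid 76554) attaches with --psk pointing at key_long.txt, the controller comes up as bdev TLSTESTn1, and bdevperf.py drives the 10-second verify workload whose result table follows. The host-side half of that flow condensed into its three commands, all taken from the trace (in the script a waitforlisten step sits between launching bdevperf and issuing RPCs):

    # 1. bdevperf in wait-for-RPC mode (-z) on its own socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # 2. TLS attach using the interchange-format PSK file
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    # 3. start the queued verify job and wait up to 20 s for the results
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests
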
00:17:17.273 00:17:17.273 Latency(us) 00:17:17.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.273 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:17.273 Verification LBA range: start 0x0 length 0x2000 00:17:17.273 TLSTESTn1 : 10.02 5229.41 20.43 0.00 0.00 24436.31 5808.87 27167.65 00:17:17.273 =================================================================================================================== 00:17:17.273 Total : 5229.41 20.43 0.00 0.00 24436.31 5808.87 27167.65 00:17:17.273 0 00:17:17.273 02:14:31 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:17.273 02:14:31 -- target/tls.sh@45 -- # killprocess 76554 00:17:17.273 02:14:31 -- common/autotest_common.sh@926 -- # '[' -z 76554 ']' 00:17:17.273 02:14:31 -- common/autotest_common.sh@930 -- # kill -0 76554 00:17:17.273 02:14:31 -- common/autotest_common.sh@931 -- # uname 00:17:17.273 02:14:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:17.273 02:14:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76554 00:17:17.273 killing process with pid 76554 00:17:17.273 Received shutdown signal, test time was about 10.000000 seconds 00:17:17.273 00:17:17.273 Latency(us) 00:17:17.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.273 =================================================================================================================== 00:17:17.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:17.273 02:14:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:17.273 02:14:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:17.273 02:14:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76554' 00:17:17.273 02:14:31 -- common/autotest_common.sh@945 -- # kill 76554 00:17:17.273 02:14:31 -- common/autotest_common.sh@950 -- # wait 76554 00:17:17.532 02:14:32 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.532 02:14:32 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.532 02:14:32 -- common/autotest_common.sh@640 -- # local es=0 00:17:17.532 02:14:32 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.532 02:14:32 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:17.532 02:14:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:17.532 02:14:32 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:17.532 02:14:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:17.532 02:14:32 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.532 02:14:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:17.532 02:14:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:17.532 02:14:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:17.532 02:14:32 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:17.532 02:14:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.532 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock... 00:17:17.532 02:14:32 -- target/tls.sh@28 -- # bdevperf_pid=76708 00:17:17.532 02:14:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.532 02:14:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.532 02:14:32 -- target/tls.sh@31 -- # waitforlisten 76708 /var/tmp/bdevperf.sock 00:17:17.532 02:14:32 -- common/autotest_common.sh@819 -- # '[' -z 76708 ']' 00:17:17.532 02:14:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.532 02:14:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:17.532 02:14:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.532 02:14:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:17.532 02:14:32 -- common/autotest_common.sh@10 -- # set +x 00:17:17.532 [2024-05-14 02:14:32.097031] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:17.532 [2024-05-14 02:14:32.097116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76708 ] 00:17:17.790 [2024-05-14 02:14:32.231853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.790 [2024-05-14 02:14:32.289619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.724 02:14:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:18.724 02:14:33 -- common/autotest_common.sh@852 -- # return 0 00:17:18.724 02:14:33 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:18.982 [2024-05-14 02:14:33.395581] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:18.982 [2024-05-14 02:14:33.395665] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:18.982 2024/05/14 02:14:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:18.982 request: 00:17:18.982 { 00:17:18.982 "method": "bdev_nvme_attach_controller", 00:17:18.983 "params": { 00:17:18.983 "name": "TLSTEST", 00:17:18.983 "trtype": "tcp", 00:17:18.983 "traddr": "10.0.0.2", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "4420", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.983 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:18.983 } 00:17:18.983 } 00:17:18.983 Got JSON-RPC error response 00:17:18.983 GoRPCClient: error on JSON-RPC call 00:17:18.983 02:14:33 -- target/tls.sh@36 -- # killprocess 76708 00:17:18.983 02:14:33 -- common/autotest_common.sh@926 -- # '[' -z 76708 ']' 
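What failed here is the host-side permission check on the PSK file: after chmod 0666 the attach is refused in tcp_load_psk with "Incorrect permissions for PSK file" and the RPC returns Code=-22 before any connection is made. A sketch of that expectation, using the same socket and NQNs as above (0600 is the mode the test treats as acceptable; the exact policy belongs to the PSK loader):

    psk=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    chmod 0666 "$psk"   # world-readable: the attach is expected to be refused
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$psk"; then
        echo "unexpected: attach succeeded with a world-readable PSK"
    fi
    chmod 0600 "$psk"   # restore owner-only access for the next positive case
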
00:17:18.983 02:14:33 -- common/autotest_common.sh@930 -- # kill -0 76708 00:17:18.983 02:14:33 -- common/autotest_common.sh@931 -- # uname 00:17:18.983 02:14:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:18.983 02:14:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76708 00:17:18.983 killing process with pid 76708 00:17:18.983 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.983 00:17:18.983 Latency(us) 00:17:18.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.983 =================================================================================================================== 00:17:18.983 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:18.983 02:14:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:18.983 02:14:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:18.983 02:14:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76708' 00:17:18.983 02:14:33 -- common/autotest_common.sh@945 -- # kill 76708 00:17:18.983 02:14:33 -- common/autotest_common.sh@950 -- # wait 76708 00:17:19.240 02:14:33 -- target/tls.sh@37 -- # return 1 00:17:19.240 02:14:33 -- common/autotest_common.sh@643 -- # es=1 00:17:19.240 02:14:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:19.240 02:14:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:19.240 02:14:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:19.240 02:14:33 -- target/tls.sh@183 -- # killprocess 76449 00:17:19.240 02:14:33 -- common/autotest_common.sh@926 -- # '[' -z 76449 ']' 00:17:19.240 02:14:33 -- common/autotest_common.sh@930 -- # kill -0 76449 00:17:19.240 02:14:33 -- common/autotest_common.sh@931 -- # uname 00:17:19.240 02:14:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:19.240 02:14:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76449 00:17:19.240 killing process with pid 76449 00:17:19.240 02:14:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:19.240 02:14:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:19.240 02:14:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76449' 00:17:19.240 02:14:33 -- common/autotest_common.sh@945 -- # kill 76449 00:17:19.240 02:14:33 -- common/autotest_common.sh@950 -- # wait 76449 00:17:19.498 02:14:33 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:19.498 02:14:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:19.498 02:14:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:19.498 02:14:33 -- common/autotest_common.sh@10 -- # set +x 00:17:19.498 02:14:33 -- nvmf/common.sh@469 -- # nvmfpid=76753 00:17:19.498 02:14:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:19.498 02:14:33 -- nvmf/common.sh@470 -- # waitforlisten 76753 00:17:19.498 02:14:33 -- common/autotest_common.sh@819 -- # '[' -z 76753 ']' 00:17:19.498 02:14:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.498 02:14:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:19.498 02:14:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:19.498 02:14:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:19.498 02:14:33 -- common/autotest_common.sh@10 -- # set +x 00:17:19.498 [2024-05-14 02:14:33.933689] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:19.498 [2024-05-14 02:14:33.933815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.498 [2024-05-14 02:14:34.072158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.756 [2024-05-14 02:14:34.130726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:19.756 [2024-05-14 02:14:34.130870] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.756 [2024-05-14 02:14:34.130885] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.756 [2024-05-14 02:14:34.130894] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.756 [2024-05-14 02:14:34.130919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.689 02:14:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:20.689 02:14:34 -- common/autotest_common.sh@852 -- # return 0 00:17:20.689 02:14:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:20.689 02:14:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:20.689 02:14:34 -- common/autotest_common.sh@10 -- # set +x 00:17:20.689 02:14:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.689 02:14:34 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.689 02:14:34 -- common/autotest_common.sh@640 -- # local es=0 00:17:20.689 02:14:34 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.689 02:14:34 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:20.689 02:14:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:20.689 02:14:34 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:20.689 02:14:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:20.689 02:14:34 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.689 02:14:34 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.689 02:14:34 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:20.689 [2024-05-14 02:14:35.254142] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.689 02:14:35 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:20.947 02:14:35 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:21.206 [2024-05-14 02:14:35.734272] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:21.206 [2024-05-14 02:14:35.734488] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
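The target coming up here is the mirror-image negative case: the same setup_nvmf_tgt sequence as before, but with the key still at 0666, so the nvmf_subsystem_add_host step is expected to fail (its error is logged next). The target-side sequence itself, as the trace runs it; -k on the listener requests a TLS-secured channel, which the saved config later records as "secure_channel": true:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # defaults to /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0        # 32 MB malloc bdev, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
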
00:17:21.206 02:14:35 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:21.464 malloc0 00:17:21.464 02:14:36 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:21.722 02:14:36 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:21.981 [2024-05-14 02:14:36.525952] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:21.981 [2024-05-14 02:14:36.526000] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:21.981 [2024-05-14 02:14:36.526021] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:21.981 2024/05/14 02:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:21.981 request: 00:17:21.981 { 00:17:21.981 "method": "nvmf_subsystem_add_host", 00:17:21.981 "params": { 00:17:21.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.981 "host": "nqn.2016-06.io.spdk:host1", 00:17:21.981 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:21.981 } 00:17:21.981 } 00:17:21.981 Got JSON-RPC error response 00:17:21.981 GoRPCClient: error on JSON-RPC call 00:17:21.981 02:14:36 -- common/autotest_common.sh@643 -- # es=1 00:17:21.981 02:14:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:21.981 02:14:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:21.981 02:14:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:21.981 02:14:36 -- target/tls.sh@189 -- # killprocess 76753 00:17:21.981 02:14:36 -- common/autotest_common.sh@926 -- # '[' -z 76753 ']' 00:17:21.981 02:14:36 -- common/autotest_common.sh@930 -- # kill -0 76753 00:17:21.981 02:14:36 -- common/autotest_common.sh@931 -- # uname 00:17:21.981 02:14:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.981 02:14:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76753 00:17:22.241 killing process with pid 76753 00:17:22.241 02:14:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:22.241 02:14:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:22.241 02:14:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76753' 00:17:22.241 02:14:36 -- common/autotest_common.sh@945 -- # kill 76753 00:17:22.241 02:14:36 -- common/autotest_common.sh@950 -- # wait 76753 00:17:22.241 02:14:36 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.241 02:14:36 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:22.241 02:14:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.241 02:14:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:22.241 02:14:36 -- common/autotest_common.sh@10 -- # set +x 00:17:22.241 02:14:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.241 02:14:36 -- nvmf/common.sh@469 -- # nvmfpid=76869 00:17:22.241 02:14:36 -- nvmf/common.sh@470 -- # waitforlisten 76869 00:17:22.241 02:14:36 -- 
common/autotest_common.sh@819 -- # '[' -z 76869 ']' 00:17:22.241 02:14:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.241 02:14:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:22.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.241 02:14:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.241 02:14:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:22.241 02:14:36 -- common/autotest_common.sh@10 -- # set +x 00:17:22.504 [2024-05-14 02:14:36.841006] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:22.504 [2024-05-14 02:14:36.841092] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.504 [2024-05-14 02:14:36.982037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.504 [2024-05-14 02:14:37.052600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.504 [2024-05-14 02:14:37.052779] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.504 [2024-05-14 02:14:37.052795] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.504 [2024-05-14 02:14:37.052805] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.504 [2024-05-14 02:14:37.052840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.518 02:14:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:23.518 02:14:37 -- common/autotest_common.sh@852 -- # return 0 00:17:23.518 02:14:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:23.518 02:14:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:23.518 02:14:37 -- common/autotest_common.sh@10 -- # set +x 00:17:23.518 02:14:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.518 02:14:37 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.518 02:14:37 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.518 02:14:37 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:23.790 [2024-05-14 02:14:38.100070] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.790 02:14:38 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:24.048 02:14:38 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:24.306 [2024-05-14 02:14:38.644190] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:24.306 [2024-05-14 02:14:38.644401] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.306 02:14:38 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:24.306 malloc0 00:17:24.565 02:14:38 -- target/tls.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:24.824 02:14:39 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:25.082 02:14:39 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:25.082 02:14:39 -- target/tls.sh@197 -- # bdevperf_pid=76972 00:17:25.082 02:14:39 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:25.082 02:14:39 -- target/tls.sh@200 -- # waitforlisten 76972 /var/tmp/bdevperf.sock 00:17:25.082 02:14:39 -- common/autotest_common.sh@819 -- # '[' -z 76972 ']' 00:17:25.082 02:14:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.082 02:14:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:25.082 02:14:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:25.082 02:14:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:25.082 02:14:39 -- common/autotest_common.sh@10 -- # set +x 00:17:25.082 [2024-05-14 02:14:39.471675] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:25.082 [2024-05-14 02:14:39.471759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76972 ] 00:17:25.082 [2024-05-14 02:14:39.605307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.341 [2024-05-14 02:14:39.672306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.908 02:14:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.908 02:14:40 -- common/autotest_common.sh@852 -- # return 0 00:17:25.908 02:14:40 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.167 [2024-05-14 02:14:40.712519] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:26.426 TLSTESTn1 00:17:26.426 02:14:40 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:26.684 02:14:41 -- target/tls.sh@205 -- # tgtconf='{ 00:17:26.684 "subsystems": [ 00:17:26.684 { 00:17:26.684 "subsystem": "iobuf", 00:17:26.684 "config": [ 00:17:26.684 { 00:17:26.684 "method": "iobuf_set_options", 00:17:26.684 "params": { 00:17:26.684 "large_bufsize": 135168, 00:17:26.684 "large_pool_count": 1024, 00:17:26.684 "small_bufsize": 8192, 00:17:26.684 "small_pool_count": 8192 00:17:26.684 } 00:17:26.684 } 00:17:26.684 ] 00:17:26.684 }, 00:17:26.684 { 00:17:26.684 "subsystem": "sock", 00:17:26.684 "config": [ 00:17:26.684 { 00:17:26.684 "method": "sock_impl_set_options", 00:17:26.684 "params": { 00:17:26.684 "enable_ktls": false, 00:17:26.684 "enable_placement_id": 0, 00:17:26.684 "enable_quickack": false, 00:17:26.684 "enable_recv_pipe": true, 00:17:26.684 
"enable_zerocopy_send_client": false, 00:17:26.684 "enable_zerocopy_send_server": true, 00:17:26.684 "impl_name": "posix", 00:17:26.684 "recv_buf_size": 2097152, 00:17:26.684 "send_buf_size": 2097152, 00:17:26.684 "tls_version": 0, 00:17:26.684 "zerocopy_threshold": 0 00:17:26.684 } 00:17:26.684 }, 00:17:26.684 { 00:17:26.684 "method": "sock_impl_set_options", 00:17:26.684 "params": { 00:17:26.684 "enable_ktls": false, 00:17:26.684 "enable_placement_id": 0, 00:17:26.684 "enable_quickack": false, 00:17:26.684 "enable_recv_pipe": true, 00:17:26.684 "enable_zerocopy_send_client": false, 00:17:26.684 "enable_zerocopy_send_server": true, 00:17:26.684 "impl_name": "ssl", 00:17:26.684 "recv_buf_size": 4096, 00:17:26.684 "send_buf_size": 4096, 00:17:26.684 "tls_version": 0, 00:17:26.684 "zerocopy_threshold": 0 00:17:26.684 } 00:17:26.684 } 00:17:26.684 ] 00:17:26.684 }, 00:17:26.684 { 00:17:26.684 "subsystem": "vmd", 00:17:26.684 "config": [] 00:17:26.684 }, 00:17:26.684 { 00:17:26.684 "subsystem": "accel", 00:17:26.684 "config": [ 00:17:26.684 { 00:17:26.684 "method": "accel_set_options", 00:17:26.684 "params": { 00:17:26.684 "buf_count": 2048, 00:17:26.684 "large_cache_size": 16, 00:17:26.684 "sequence_count": 2048, 00:17:26.684 "small_cache_size": 128, 00:17:26.684 "task_count": 2048 00:17:26.684 } 00:17:26.684 } 00:17:26.684 ] 00:17:26.684 }, 00:17:26.684 { 00:17:26.684 "subsystem": "bdev", 00:17:26.684 "config": [ 00:17:26.684 { 00:17:26.684 "method": "bdev_set_options", 00:17:26.684 "params": { 00:17:26.684 "bdev_auto_examine": true, 00:17:26.684 "bdev_io_cache_size": 256, 00:17:26.684 "bdev_io_pool_size": 65535, 00:17:26.684 "iobuf_large_cache_size": 16, 00:17:26.684 "iobuf_small_cache_size": 128 00:17:26.684 } 00:17:26.684 }, 00:17:26.684 { 00:17:26.684 "method": "bdev_raid_set_options", 00:17:26.684 "params": { 00:17:26.684 "process_window_size_kb": 1024 00:17:26.684 } 00:17:26.684 }, 00:17:26.684 { 00:17:26.684 "method": "bdev_iscsi_set_options", 00:17:26.684 "params": { 00:17:26.684 "timeout_sec": 30 00:17:26.684 } 00:17:26.684 }, 00:17:26.684 { 00:17:26.684 "method": "bdev_nvme_set_options", 00:17:26.684 "params": { 00:17:26.684 "action_on_timeout": "none", 00:17:26.684 "allow_accel_sequence": false, 00:17:26.684 "arbitration_burst": 0, 00:17:26.684 "bdev_retry_count": 3, 00:17:26.684 "ctrlr_loss_timeout_sec": 0, 00:17:26.684 "delay_cmd_submit": true, 00:17:26.684 "fast_io_fail_timeout_sec": 0, 00:17:26.684 "generate_uuids": false, 00:17:26.684 "high_priority_weight": 0, 00:17:26.684 "io_path_stat": false, 00:17:26.684 "io_queue_requests": 0, 00:17:26.684 "keep_alive_timeout_ms": 10000, 00:17:26.684 "low_priority_weight": 0, 00:17:26.684 "medium_priority_weight": 0, 00:17:26.684 "nvme_adminq_poll_period_us": 10000, 00:17:26.684 "nvme_ioq_poll_period_us": 0, 00:17:26.684 "reconnect_delay_sec": 0, 00:17:26.684 "timeout_admin_us": 0, 00:17:26.684 "timeout_us": 0, 00:17:26.684 "transport_ack_timeout": 0, 00:17:26.684 "transport_retry_count": 4, 00:17:26.684 "transport_tos": 0 00:17:26.684 } 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "method": "bdev_nvme_set_hotplug", 00:17:26.685 "params": { 00:17:26.685 "enable": false, 00:17:26.685 "period_us": 100000 00:17:26.685 } 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "method": "bdev_malloc_create", 00:17:26.685 "params": { 00:17:26.685 "block_size": 4096, 00:17:26.685 "name": "malloc0", 00:17:26.685 "num_blocks": 8192, 00:17:26.685 "optimal_io_boundary": 0, 00:17:26.685 "physical_block_size": 4096, 00:17:26.685 "uuid": 
"1272bea9-26f1-4a3a-9a9c-ceeda635f5fa" 00:17:26.685 } 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "method": "bdev_wait_for_examine" 00:17:26.685 } 00:17:26.685 ] 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "subsystem": "nbd", 00:17:26.685 "config": [] 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "subsystem": "scheduler", 00:17:26.685 "config": [ 00:17:26.685 { 00:17:26.685 "method": "framework_set_scheduler", 00:17:26.685 "params": { 00:17:26.685 "name": "static" 00:17:26.685 } 00:17:26.685 } 00:17:26.685 ] 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "subsystem": "nvmf", 00:17:26.685 "config": [ 00:17:26.685 { 00:17:26.685 "method": "nvmf_set_config", 00:17:26.685 "params": { 00:17:26.685 "admin_cmd_passthru": { 00:17:26.685 "identify_ctrlr": false 00:17:26.685 }, 00:17:26.685 "discovery_filter": "match_any" 00:17:26.685 } 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "method": "nvmf_set_max_subsystems", 00:17:26.685 "params": { 00:17:26.685 "max_subsystems": 1024 00:17:26.685 } 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "method": "nvmf_set_crdt", 00:17:26.685 "params": { 00:17:26.685 "crdt1": 0, 00:17:26.685 "crdt2": 0, 00:17:26.685 "crdt3": 0 00:17:26.685 } 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "method": "nvmf_create_transport", 00:17:26.685 "params": { 00:17:26.685 "abort_timeout_sec": 1, 00:17:26.685 "buf_cache_size": 4294967295, 00:17:26.685 "c2h_success": false, 00:17:26.685 "dif_insert_or_strip": false, 00:17:26.685 "in_capsule_data_size": 4096, 00:17:26.685 "io_unit_size": 131072, 00:17:26.685 "max_aq_depth": 128, 00:17:26.685 "max_io_qpairs_per_ctrlr": 127, 00:17:26.685 "max_io_size": 131072, 00:17:26.685 "max_queue_depth": 128, 00:17:26.685 "num_shared_buffers": 511, 00:17:26.685 "sock_priority": 0, 00:17:26.685 "trtype": "TCP", 00:17:26.685 "zcopy": false 00:17:26.685 } 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "method": "nvmf_create_subsystem", 00:17:26.685 "params": { 00:17:26.685 "allow_any_host": false, 00:17:26.685 "ana_reporting": false, 00:17:26.685 "max_cntlid": 65519, 00:17:26.685 "max_namespaces": 10, 00:17:26.685 "min_cntlid": 1, 00:17:26.685 "model_number": "SPDK bdev Controller", 00:17:26.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.685 "serial_number": "SPDK00000000000001" 00:17:26.685 } 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "method": "nvmf_subsystem_add_host", 00:17:26.685 "params": { 00:17:26.685 "host": "nqn.2016-06.io.spdk:host1", 00:17:26.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.685 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:26.685 } 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "method": "nvmf_subsystem_add_ns", 00:17:26.685 "params": { 00:17:26.685 "namespace": { 00:17:26.685 "bdev_name": "malloc0", 00:17:26.685 "nguid": "1272BEA926F14A3A9A9CCEEDA635F5FA", 00:17:26.685 "nsid": 1, 00:17:26.685 "uuid": "1272bea9-26f1-4a3a-9a9c-ceeda635f5fa" 00:17:26.685 }, 00:17:26.685 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:26.685 } 00:17:26.685 }, 00:17:26.685 { 00:17:26.685 "method": "nvmf_subsystem_add_listener", 00:17:26.685 "params": { 00:17:26.685 "listen_address": { 00:17:26.685 "adrfam": "IPv4", 00:17:26.685 "traddr": "10.0.0.2", 00:17:26.685 "trsvcid": "4420", 00:17:26.685 "trtype": "TCP" 00:17:26.685 }, 00:17:26.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.685 "secure_channel": true 00:17:26.685 } 00:17:26.685 } 00:17:26.685 ] 00:17:26.685 } 00:17:26.685 ] 00:17:26.685 }' 00:17:26.685 02:14:41 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:17:26.943 02:14:41 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:26.943 "subsystems": [ 00:17:26.943 { 00:17:26.943 "subsystem": "iobuf", 00:17:26.943 "config": [ 00:17:26.943 { 00:17:26.943 "method": "iobuf_set_options", 00:17:26.943 "params": { 00:17:26.943 "large_bufsize": 135168, 00:17:26.943 "large_pool_count": 1024, 00:17:26.943 "small_bufsize": 8192, 00:17:26.943 "small_pool_count": 8192 00:17:26.943 } 00:17:26.943 } 00:17:26.943 ] 00:17:26.943 }, 00:17:26.943 { 00:17:26.943 "subsystem": "sock", 00:17:26.943 "config": [ 00:17:26.943 { 00:17:26.943 "method": "sock_impl_set_options", 00:17:26.943 "params": { 00:17:26.943 "enable_ktls": false, 00:17:26.943 "enable_placement_id": 0, 00:17:26.943 "enable_quickack": false, 00:17:26.943 "enable_recv_pipe": true, 00:17:26.943 "enable_zerocopy_send_client": false, 00:17:26.943 "enable_zerocopy_send_server": true, 00:17:26.943 "impl_name": "posix", 00:17:26.943 "recv_buf_size": 2097152, 00:17:26.943 "send_buf_size": 2097152, 00:17:26.943 "tls_version": 0, 00:17:26.943 "zerocopy_threshold": 0 00:17:26.943 } 00:17:26.943 }, 00:17:26.943 { 00:17:26.943 "method": "sock_impl_set_options", 00:17:26.943 "params": { 00:17:26.943 "enable_ktls": false, 00:17:26.943 "enable_placement_id": 0, 00:17:26.943 "enable_quickack": false, 00:17:26.943 "enable_recv_pipe": true, 00:17:26.943 "enable_zerocopy_send_client": false, 00:17:26.943 "enable_zerocopy_send_server": true, 00:17:26.943 "impl_name": "ssl", 00:17:26.943 "recv_buf_size": 4096, 00:17:26.943 "send_buf_size": 4096, 00:17:26.943 "tls_version": 0, 00:17:26.943 "zerocopy_threshold": 0 00:17:26.943 } 00:17:26.943 } 00:17:26.943 ] 00:17:26.943 }, 00:17:26.943 { 00:17:26.943 "subsystem": "vmd", 00:17:26.943 "config": [] 00:17:26.943 }, 00:17:26.943 { 00:17:26.943 "subsystem": "accel", 00:17:26.943 "config": [ 00:17:26.943 { 00:17:26.943 "method": "accel_set_options", 00:17:26.943 "params": { 00:17:26.943 "buf_count": 2048, 00:17:26.943 "large_cache_size": 16, 00:17:26.943 "sequence_count": 2048, 00:17:26.943 "small_cache_size": 128, 00:17:26.943 "task_count": 2048 00:17:26.943 } 00:17:26.943 } 00:17:26.943 ] 00:17:26.943 }, 00:17:26.943 { 00:17:26.943 "subsystem": "bdev", 00:17:26.943 "config": [ 00:17:26.943 { 00:17:26.943 "method": "bdev_set_options", 00:17:26.943 "params": { 00:17:26.943 "bdev_auto_examine": true, 00:17:26.943 "bdev_io_cache_size": 256, 00:17:26.943 "bdev_io_pool_size": 65535, 00:17:26.943 "iobuf_large_cache_size": 16, 00:17:26.943 "iobuf_small_cache_size": 128 00:17:26.944 } 00:17:26.944 }, 00:17:26.944 { 00:17:26.944 "method": "bdev_raid_set_options", 00:17:26.944 "params": { 00:17:26.944 "process_window_size_kb": 1024 00:17:26.944 } 00:17:26.944 }, 00:17:26.944 { 00:17:26.944 "method": "bdev_iscsi_set_options", 00:17:26.944 "params": { 00:17:26.944 "timeout_sec": 30 00:17:26.944 } 00:17:26.944 }, 00:17:26.944 { 00:17:26.944 "method": "bdev_nvme_set_options", 00:17:26.944 "params": { 00:17:26.944 "action_on_timeout": "none", 00:17:26.944 "allow_accel_sequence": false, 00:17:26.944 "arbitration_burst": 0, 00:17:26.944 "bdev_retry_count": 3, 00:17:26.944 "ctrlr_loss_timeout_sec": 0, 00:17:26.944 "delay_cmd_submit": true, 00:17:26.944 "fast_io_fail_timeout_sec": 0, 00:17:26.944 "generate_uuids": false, 00:17:26.944 "high_priority_weight": 0, 00:17:26.944 "io_path_stat": false, 00:17:26.944 "io_queue_requests": 512, 00:17:26.944 "keep_alive_timeout_ms": 10000, 00:17:26.944 "low_priority_weight": 0, 00:17:26.944 "medium_priority_weight": 0, 00:17:26.944 "nvme_adminq_poll_period_us": 
10000, 00:17:26.944 "nvme_ioq_poll_period_us": 0, 00:17:26.944 "reconnect_delay_sec": 0, 00:17:26.944 "timeout_admin_us": 0, 00:17:26.944 "timeout_us": 0, 00:17:26.944 "transport_ack_timeout": 0, 00:17:26.944 "transport_retry_count": 4, 00:17:26.944 "transport_tos": 0 00:17:26.944 } 00:17:26.944 }, 00:17:26.944 { 00:17:26.944 "method": "bdev_nvme_attach_controller", 00:17:26.944 "params": { 00:17:26.944 "adrfam": "IPv4", 00:17:26.944 "ctrlr_loss_timeout_sec": 0, 00:17:26.944 "ddgst": false, 00:17:26.944 "fast_io_fail_timeout_sec": 0, 00:17:26.944 "hdgst": false, 00:17:26.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:26.944 "name": "TLSTEST", 00:17:26.944 "prchk_guard": false, 00:17:26.944 "prchk_reftag": false, 00:17:26.944 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:26.944 "reconnect_delay_sec": 0, 00:17:26.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.944 "traddr": "10.0.0.2", 00:17:26.944 "trsvcid": "4420", 00:17:26.944 "trtype": "TCP" 00:17:26.944 } 00:17:26.944 }, 00:17:26.944 { 00:17:26.944 "method": "bdev_nvme_set_hotplug", 00:17:26.944 "params": { 00:17:26.944 "enable": false, 00:17:26.944 "period_us": 100000 00:17:26.944 } 00:17:26.944 }, 00:17:26.944 { 00:17:26.944 "method": "bdev_wait_for_examine" 00:17:26.944 } 00:17:26.944 ] 00:17:26.944 }, 00:17:26.944 { 00:17:26.944 "subsystem": "nbd", 00:17:26.944 "config": [] 00:17:26.944 } 00:17:26.944 ] 00:17:26.944 }' 00:17:26.944 02:14:41 -- target/tls.sh@208 -- # killprocess 76972 00:17:26.944 02:14:41 -- common/autotest_common.sh@926 -- # '[' -z 76972 ']' 00:17:26.944 02:14:41 -- common/autotest_common.sh@930 -- # kill -0 76972 00:17:26.944 02:14:41 -- common/autotest_common.sh@931 -- # uname 00:17:26.944 02:14:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.944 02:14:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76972 00:17:26.944 02:14:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:26.944 02:14:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:26.944 killing process with pid 76972 00:17:26.944 02:14:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76972' 00:17:26.944 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.944 00:17:26.944 Latency(us) 00:17:26.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.944 =================================================================================================================== 00:17:26.944 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:26.944 02:14:41 -- common/autotest_common.sh@945 -- # kill 76972 00:17:26.944 02:14:41 -- common/autotest_common.sh@950 -- # wait 76972 00:17:27.201 02:14:41 -- target/tls.sh@209 -- # killprocess 76869 00:17:27.201 02:14:41 -- common/autotest_common.sh@926 -- # '[' -z 76869 ']' 00:17:27.201 02:14:41 -- common/autotest_common.sh@930 -- # kill -0 76869 00:17:27.201 02:14:41 -- common/autotest_common.sh@931 -- # uname 00:17:27.201 02:14:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:27.201 02:14:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76869 00:17:27.201 02:14:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:27.201 02:14:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:27.201 killing process with pid 76869 00:17:27.201 02:14:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76869' 00:17:27.201 02:14:41 -- 
common/autotest_common.sh@945 -- # kill 76869 00:17:27.201 02:14:41 -- common/autotest_common.sh@950 -- # wait 76869 00:17:27.459 02:14:41 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:27.459 02:14:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:27.459 02:14:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:27.459 02:14:41 -- common/autotest_common.sh@10 -- # set +x 00:17:27.459 02:14:41 -- target/tls.sh@212 -- # echo '{ 00:17:27.459 "subsystems": [ 00:17:27.459 { 00:17:27.459 "subsystem": "iobuf", 00:17:27.459 "config": [ 00:17:27.459 { 00:17:27.459 "method": "iobuf_set_options", 00:17:27.459 "params": { 00:17:27.459 "large_bufsize": 135168, 00:17:27.459 "large_pool_count": 1024, 00:17:27.459 "small_bufsize": 8192, 00:17:27.459 "small_pool_count": 8192 00:17:27.459 } 00:17:27.459 } 00:17:27.459 ] 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "subsystem": "sock", 00:17:27.459 "config": [ 00:17:27.459 { 00:17:27.459 "method": "sock_impl_set_options", 00:17:27.459 "params": { 00:17:27.459 "enable_ktls": false, 00:17:27.459 "enable_placement_id": 0, 00:17:27.459 "enable_quickack": false, 00:17:27.459 "enable_recv_pipe": true, 00:17:27.459 "enable_zerocopy_send_client": false, 00:17:27.459 "enable_zerocopy_send_server": true, 00:17:27.459 "impl_name": "posix", 00:17:27.459 "recv_buf_size": 2097152, 00:17:27.459 "send_buf_size": 2097152, 00:17:27.459 "tls_version": 0, 00:17:27.459 "zerocopy_threshold": 0 00:17:27.459 } 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "method": "sock_impl_set_options", 00:17:27.459 "params": { 00:17:27.459 "enable_ktls": false, 00:17:27.459 "enable_placement_id": 0, 00:17:27.459 "enable_quickack": false, 00:17:27.459 "enable_recv_pipe": true, 00:17:27.459 "enable_zerocopy_send_client": false, 00:17:27.459 "enable_zerocopy_send_server": true, 00:17:27.459 "impl_name": "ssl", 00:17:27.459 "recv_buf_size": 4096, 00:17:27.459 "send_buf_size": 4096, 00:17:27.459 "tls_version": 0, 00:17:27.459 "zerocopy_threshold": 0 00:17:27.459 } 00:17:27.459 } 00:17:27.459 ] 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "subsystem": "vmd", 00:17:27.459 "config": [] 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "subsystem": "accel", 00:17:27.459 "config": [ 00:17:27.459 { 00:17:27.459 "method": "accel_set_options", 00:17:27.459 "params": { 00:17:27.459 "buf_count": 2048, 00:17:27.459 "large_cache_size": 16, 00:17:27.459 "sequence_count": 2048, 00:17:27.459 "small_cache_size": 128, 00:17:27.459 "task_count": 2048 00:17:27.459 } 00:17:27.459 } 00:17:27.459 ] 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "subsystem": "bdev", 00:17:27.459 "config": [ 00:17:27.459 { 00:17:27.459 "method": "bdev_set_options", 00:17:27.459 "params": { 00:17:27.459 "bdev_auto_examine": true, 00:17:27.459 "bdev_io_cache_size": 256, 00:17:27.459 "bdev_io_pool_size": 65535, 00:17:27.459 "iobuf_large_cache_size": 16, 00:17:27.459 "iobuf_small_cache_size": 128 00:17:27.459 } 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "method": "bdev_raid_set_options", 00:17:27.459 "params": { 00:17:27.459 "process_window_size_kb": 1024 00:17:27.459 } 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "method": "bdev_iscsi_set_options", 00:17:27.459 "params": { 00:17:27.459 "timeout_sec": 30 00:17:27.459 } 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "method": "bdev_nvme_set_options", 00:17:27.459 "params": { 00:17:27.459 "action_on_timeout": "none", 00:17:27.459 "allow_accel_sequence": false, 00:17:27.459 "arbitration_burst": 0, 00:17:27.459 "bdev_retry_count": 3, 00:17:27.459 
"ctrlr_loss_timeout_sec": 0, 00:17:27.459 "delay_cmd_submit": true, 00:17:27.459 "fast_io_fail_timeout_sec": 0, 00:17:27.459 "generate_uuids": false, 00:17:27.459 "high_priority_weight": 0, 00:17:27.459 "io_path_stat": false, 00:17:27.459 "io_queue_requests": 0, 00:17:27.459 "keep_alive_timeout_ms": 10000, 00:17:27.459 "low_priority_weight": 0, 00:17:27.459 "medium_priority_weight": 0, 00:17:27.459 "nvme_adminq_poll_period_us": 10000, 00:17:27.459 "nvme_ioq_poll_period_us": 0, 00:17:27.459 "reconnect_delay_sec": 0, 00:17:27.459 "timeout_admin_us": 0, 00:17:27.459 "timeout_us": 0, 00:17:27.459 "transport_ack_timeout": 0, 00:17:27.459 "transport_retry_count": 4, 00:17:27.459 "transport_tos": 0 00:17:27.459 } 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "method": "bdev_nvme_set_hotplug", 00:17:27.459 "params": { 00:17:27.459 "enable": false, 00:17:27.459 "period_us": 100000 00:17:27.459 } 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "method": "bdev_malloc_create", 00:17:27.459 "params": { 00:17:27.459 "block_size": 4096, 00:17:27.459 "name": "malloc0", 00:17:27.459 "num_blocks": 8192, 00:17:27.459 "optimal_io_boundary": 0, 00:17:27.459 "physical_block_size": 4096, 00:17:27.459 "uuid": "1272bea9-26f1-4a3a-9a9c-ceeda635f5fa" 00:17:27.459 } 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "method": "bdev_wait_for_examine" 00:17:27.459 } 00:17:27.459 ] 00:17:27.459 }, 00:17:27.459 { 00:17:27.459 "subsystem": "nbd", 00:17:27.459 "config": [] 00:17:27.460 }, 00:17:27.460 { 00:17:27.460 "subsystem": "scheduler", 00:17:27.460 "config": [ 00:17:27.460 { 00:17:27.460 "method": "framework_set_scheduler", 00:17:27.460 "params": { 00:17:27.460 "name": "static" 00:17:27.460 } 00:17:27.460 } 00:17:27.460 ] 00:17:27.460 }, 00:17:27.460 { 00:17:27.460 "subsystem": "nvmf", 00:17:27.460 "config": [ 00:17:27.460 { 00:17:27.460 "method": "nvmf_set_config", 00:17:27.460 "params": { 00:17:27.460 "admin_cmd_passthru": { 00:17:27.460 "identify_ctrlr": false 00:17:27.460 }, 00:17:27.460 "discovery_filter": "match_any" 00:17:27.460 } 00:17:27.460 }, 00:17:27.460 { 00:17:27.460 "method": "nvmf_set_max_subsystems", 00:17:27.460 "params": { 00:17:27.460 "max_subsystems": 1024 00:17:27.460 } 00:17:27.460 }, 00:17:27.460 { 00:17:27.460 "method": "nvmf_set_crdt", 00:17:27.460 "params": { 00:17:27.460 "crdt1": 0, 00:17:27.460 "crdt2": 0, 00:17:27.460 "crdt3": 0 00:17:27.460 } 00:17:27.460 }, 00:17:27.460 { 00:17:27.460 "method": "nvmf_create_transport", 00:17:27.460 "params": { 00:17:27.460 "abort_timeout_sec": 1, 00:17:27.460 "buf_cache_size": 4294967295, 00:17:27.460 "c2h_success": false, 00:17:27.460 "dif_insert_or_strip": false, 00:17:27.460 "in_capsule_data_size": 4096, 00:17:27.460 "io_unit_size": 131072, 00:17:27.460 "max_aq_depth": 128, 00:17:27.460 "max_io_qpairs_per_ctrlr": 127, 00:17:27.460 "max_io_size": 131072, 00:17:27.460 "max_queue_depth": 128, 00:17:27.460 "num_shared_buffers": 511, 00:17:27.460 "sock_priority": 0, 00:17:27.460 "trtype": "TCP", 00:17:27.460 "zcopy": false 00:17:27.460 } 00:17:27.460 }, 00:17:27.460 { 00:17:27.460 "method": "nvmf_create_subsystem", 00:17:27.460 "params": { 00:17:27.460 "allow_any_host": false, 00:17:27.460 "ana_reporting": false, 00:17:27.460 "max_cntlid": 65519, 00:17:27.460 "max_namespaces": 10, 00:17:27.460 "min_cntlid": 1, 00:17:27.460 "model_number": "SPDK bdev Controller", 00:17:27.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.460 "serial_number": "SPDK00000000000001" 00:17:27.460 } 00:17:27.460 }, 00:17:27.460 { 00:17:27.460 "method": "nvmf_subsystem_add_host", 
00:17:27.460 "params": { 00:17:27.460 "host": "nqn.2016-06.io.spdk:host1", 00:17:27.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.460 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:27.460 } 00:17:27.460 }, 00:17:27.460 { 00:17:27.460 "method": "nvmf_subsystem_add_ns", 00:17:27.460 "params": { 00:17:27.460 "namespace": { 00:17:27.460 "bdev_name": "malloc0", 00:17:27.460 "nguid": "1272BEA926F14A3A9A9CCEEDA635F5FA", 00:17:27.460 "nsid": 1, 00:17:27.460 "uuid": "1272bea9-26f1-4a3a-9a9c-ceeda635f5fa" 00:17:27.460 }, 00:17:27.460 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:27.460 } 00:17:27.460 }, 00:17:27.460 { 00:17:27.460 "method": "nvmf_subsystem_add_listener", 00:17:27.460 "params": { 00:17:27.460 "listen_address": { 00:17:27.460 "adrfam": "IPv4", 00:17:27.460 "traddr": "10.0.0.2", 00:17:27.460 "trsvcid": "4420", 00:17:27.460 "trtype": "TCP" 00:17:27.460 }, 00:17:27.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.460 "secure_channel": true 00:17:27.460 } 00:17:27.460 } 00:17:27.460 ] 00:17:27.460 } 00:17:27.460 ] 00:17:27.460 }' 00:17:27.460 02:14:41 -- nvmf/common.sh@469 -- # nvmfpid=77045 00:17:27.460 02:14:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:27.460 02:14:41 -- nvmf/common.sh@470 -- # waitforlisten 77045 00:17:27.460 02:14:41 -- common/autotest_common.sh@819 -- # '[' -z 77045 ']' 00:17:27.460 02:14:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.460 02:14:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:27.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.460 02:14:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.460 02:14:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:27.460 02:14:41 -- common/autotest_common.sh@10 -- # set +x 00:17:27.460 [2024-05-14 02:14:41.947211] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:27.460 [2024-05-14 02:14:41.947327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.718 [2024-05-14 02:14:42.078955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.718 [2024-05-14 02:14:42.162040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:27.718 [2024-05-14 02:14:42.162242] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.718 [2024-05-14 02:14:42.162270] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.718 [2024-05-14 02:14:42.162287] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:27.718 [2024-05-14 02:14:42.162332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.976 [2024-05-14 02:14:42.336516] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.976 [2024-05-14 02:14:42.368464] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:27.976 [2024-05-14 02:14:42.368671] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.544 02:14:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:28.544 02:14:42 -- common/autotest_common.sh@852 -- # return 0 00:17:28.544 02:14:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:28.544 02:14:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:28.544 02:14:42 -- common/autotest_common.sh@10 -- # set +x 00:17:28.544 02:14:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:28.544 02:14:42 -- target/tls.sh@216 -- # bdevperf_pid=77089 00:17:28.544 02:14:42 -- target/tls.sh@217 -- # waitforlisten 77089 /var/tmp/bdevperf.sock 00:17:28.544 02:14:42 -- common/autotest_common.sh@819 -- # '[' -z 77089 ']' 00:17:28.544 02:14:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.544 02:14:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:28.544 02:14:42 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:28.544 02:14:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
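Editor's note: bdevperf is launched here in wait-for-RPC mode on its own core mask and RPC socket, and only runs traffic once configured. The restatement below is a sketch of the invocation above with the commonly documented meaning of each flag added as comments; the flag glosses are from general bdevperf usage, not from this log.

# Sketch of the bdevperf invocation above.
#   -m 0x4                       run on core 2 only
#   -z                           start idle and wait for RPC-driven setup (perform_tests)
#   -r /var/tmp/bdevperf.sock    dedicated RPC socket, separate from the target's
#   -q 128 -o 4096               queue depth 128, 4 KiB I/O size
#   -w verify -t 10              verify workload for 10 seconds
#   -c /dev/fd/63                JSON config (the blob echoed just below) piped in on fd 63
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63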
00:17:28.544 02:14:42 -- target/tls.sh@213 -- # echo '{ 00:17:28.544 "subsystems": [ 00:17:28.544 { 00:17:28.544 "subsystem": "iobuf", 00:17:28.544 "config": [ 00:17:28.544 { 00:17:28.544 "method": "iobuf_set_options", 00:17:28.544 "params": { 00:17:28.544 "large_bufsize": 135168, 00:17:28.544 "large_pool_count": 1024, 00:17:28.544 "small_bufsize": 8192, 00:17:28.544 "small_pool_count": 8192 00:17:28.544 } 00:17:28.544 } 00:17:28.544 ] 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "subsystem": "sock", 00:17:28.544 "config": [ 00:17:28.544 { 00:17:28.544 "method": "sock_impl_set_options", 00:17:28.544 "params": { 00:17:28.544 "enable_ktls": false, 00:17:28.544 "enable_placement_id": 0, 00:17:28.544 "enable_quickack": false, 00:17:28.544 "enable_recv_pipe": true, 00:17:28.544 "enable_zerocopy_send_client": false, 00:17:28.544 "enable_zerocopy_send_server": true, 00:17:28.544 "impl_name": "posix", 00:17:28.544 "recv_buf_size": 2097152, 00:17:28.544 "send_buf_size": 2097152, 00:17:28.544 "tls_version": 0, 00:17:28.544 "zerocopy_threshold": 0 00:17:28.544 } 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "method": "sock_impl_set_options", 00:17:28.544 "params": { 00:17:28.544 "enable_ktls": false, 00:17:28.544 "enable_placement_id": 0, 00:17:28.544 "enable_quickack": false, 00:17:28.544 "enable_recv_pipe": true, 00:17:28.544 "enable_zerocopy_send_client": false, 00:17:28.544 "enable_zerocopy_send_server": true, 00:17:28.544 "impl_name": "ssl", 00:17:28.544 "recv_buf_size": 4096, 00:17:28.544 "send_buf_size": 4096, 00:17:28.544 "tls_version": 0, 00:17:28.544 "zerocopy_threshold": 0 00:17:28.544 } 00:17:28.544 } 00:17:28.544 ] 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "subsystem": "vmd", 00:17:28.544 "config": [] 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "subsystem": "accel", 00:17:28.544 "config": [ 00:17:28.544 { 00:17:28.544 "method": "accel_set_options", 00:17:28.544 "params": { 00:17:28.544 "buf_count": 2048, 00:17:28.544 "large_cache_size": 16, 00:17:28.544 "sequence_count": 2048, 00:17:28.544 "small_cache_size": 128, 00:17:28.544 "task_count": 2048 00:17:28.544 } 00:17:28.544 } 00:17:28.544 ] 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "subsystem": "bdev", 00:17:28.544 "config": [ 00:17:28.544 { 00:17:28.544 "method": "bdev_set_options", 00:17:28.544 "params": { 00:17:28.544 "bdev_auto_examine": true, 00:17:28.544 "bdev_io_cache_size": 256, 00:17:28.544 "bdev_io_pool_size": 65535, 00:17:28.544 "iobuf_large_cache_size": 16, 00:17:28.544 "iobuf_small_cache_size": 128 00:17:28.544 } 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "method": "bdev_raid_set_options", 00:17:28.544 "params": { 00:17:28.544 "process_window_size_kb": 1024 00:17:28.544 } 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "method": "bdev_iscsi_set_options", 00:17:28.544 "params": { 00:17:28.544 "timeout_sec": 30 00:17:28.544 } 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "method": "bdev_nvme_set_options", 00:17:28.544 "params": { 00:17:28.544 "action_on_timeout": "none", 00:17:28.544 "allow_accel_sequence": false, 00:17:28.544 "arbitration_burst": 0, 00:17:28.544 "bdev_retry_count": 3, 00:17:28.544 "ctrlr_loss_timeout_sec": 0, 00:17:28.544 "delay_cmd_submit": true, 00:17:28.544 "fast_io_fail_timeout_sec": 0, 00:17:28.544 "generate_uuids": false, 00:17:28.544 "high_priority_weight": 0, 00:17:28.544 "io_path_stat": false, 00:17:28.544 "io_queue_requests": 512, 00:17:28.544 "keep_alive_timeout_ms": 10000, 00:17:28.544 "low_priority_weight": 0, 00:17:28.544 "medium_priority_weight": 0, 00:17:28.544 "nvme_adminq_poll_period_us": 10000, 
00:17:28.544 "nvme_ioq_poll_period_us": 0, 00:17:28.544 "reconnect_delay_sec": 0, 00:17:28.544 "timeout_admin_us": 0, 00:17:28.544 "timeout_us": 0, 00:17:28.544 "transport_ack_timeout": 0, 00:17:28.544 "transport_retry_count": 4, 00:17:28.544 "transport_tos": 0 00:17:28.544 } 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "method": "bdev_nvme_attach_controller", 00:17:28.544 "params": { 00:17:28.544 "adrfam": "IPv4", 00:17:28.544 "ctrlr_loss_timeout_sec": 0, 00:17:28.544 "ddgst": false, 00:17:28.544 "fast_io_fail_timeout_sec": 0, 00:17:28.544 "hdgst": false, 00:17:28.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.544 "name": "TLSTEST", 00:17:28.544 "prchk_guard": false, 00:17:28.544 "prchk_reftag": false, 00:17:28.544 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:28.544 "reconnect_delay_sec": 0, 00:17:28.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.544 "traddr": "10.0.0.2", 00:17:28.544 "trsvcid": "4420", 00:17:28.544 "trtype": "TCP" 00:17:28.544 } 00:17:28.544 }, 00:17:28.544 { 00:17:28.544 "method": "bdev_nvme_set_hotplug", 00:17:28.544 "params": { 00:17:28.544 "enable": false, 00:17:28.544 "period_us": 100000 00:17:28.544 } 00:17:28.544 }, 00:17:28.544 { 00:17:28.545 "method": "bdev_wait_for_examine" 00:17:28.545 } 00:17:28.545 ] 00:17:28.545 }, 00:17:28.545 { 00:17:28.545 "subsystem": "nbd", 00:17:28.545 "config": [] 00:17:28.545 } 00:17:28.545 ] 00:17:28.545 }' 00:17:28.545 02:14:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:28.545 02:14:42 -- common/autotest_common.sh@10 -- # set +x 00:17:28.545 [2024-05-14 02:14:42.999888] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:28.545 [2024-05-14 02:14:42.999992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77089 ] 00:17:28.803 [2024-05-14 02:14:43.135583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.803 [2024-05-14 02:14:43.215596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.803 [2024-05-14 02:14:43.341493] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.755 02:14:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:29.755 02:14:43 -- common/autotest_common.sh@852 -- # return 0 00:17:29.755 02:14:43 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:29.755 Running I/O for 10 seconds... 
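Editor's note: everything the /dev/fd/63 config does for the initiator could also be driven over the RPC socket once bdevperf is up. The sketch below mirrors the rpc.py attach that the fips test performs later in this log (same flags); only the ordering and the placeholder key path are assumptions.

# Sketch: drive the TLS attach over the RPC socket instead of the config blob.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /path/to/key_long.txt
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests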
00:17:39.728 00:17:39.728 Latency(us) 00:17:39.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.728 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:39.728 Verification LBA range: start 0x0 length 0x2000 00:17:39.728 TLSTESTn1 : 10.02 5220.73 20.39 0.00 0.00 24474.81 4706.68 26452.71 00:17:39.728 =================================================================================================================== 00:17:39.728 Total : 5220.73 20.39 0.00 0.00 24474.81 4706.68 26452.71 00:17:39.728 0 00:17:39.728 02:14:54 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:39.728 02:14:54 -- target/tls.sh@223 -- # killprocess 77089 00:17:39.728 02:14:54 -- common/autotest_common.sh@926 -- # '[' -z 77089 ']' 00:17:39.728 02:14:54 -- common/autotest_common.sh@930 -- # kill -0 77089 00:17:39.728 02:14:54 -- common/autotest_common.sh@931 -- # uname 00:17:39.728 02:14:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:39.728 02:14:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77089 00:17:39.728 02:14:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:39.728 killing process with pid 77089 00:17:39.728 02:14:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:39.728 02:14:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77089' 00:17:39.728 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.728 00:17:39.728 Latency(us) 00:17:39.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.728 =================================================================================================================== 00:17:39.728 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.728 02:14:54 -- common/autotest_common.sh@945 -- # kill 77089 00:17:39.728 02:14:54 -- common/autotest_common.sh@950 -- # wait 77089 00:17:39.985 02:14:54 -- target/tls.sh@224 -- # killprocess 77045 00:17:39.985 02:14:54 -- common/autotest_common.sh@926 -- # '[' -z 77045 ']' 00:17:39.985 02:14:54 -- common/autotest_common.sh@930 -- # kill -0 77045 00:17:39.985 02:14:54 -- common/autotest_common.sh@931 -- # uname 00:17:39.985 02:14:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:39.985 02:14:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77045 00:17:39.985 02:14:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:39.985 killing process with pid 77045 00:17:39.985 02:14:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:39.985 02:14:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77045' 00:17:39.985 02:14:54 -- common/autotest_common.sh@945 -- # kill 77045 00:17:39.985 02:14:54 -- common/autotest_common.sh@950 -- # wait 77045 00:17:40.244 02:14:54 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:40.244 02:14:54 -- target/tls.sh@227 -- # cleanup 00:17:40.244 02:14:54 -- target/tls.sh@15 -- # process_shm --id 0 00:17:40.244 02:14:54 -- common/autotest_common.sh@796 -- # type=--id 00:17:40.244 02:14:54 -- common/autotest_common.sh@797 -- # id=0 00:17:40.244 02:14:54 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:40.244 02:14:54 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:40.244 02:14:54 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:40.244 02:14:54 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 
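Editor's note: a quick sanity check on the result table above. The MiB/s column is just IOPS times the 4 KiB I/O size, so 5220.73 IOPS works out to about 20.4 MiB/s, matching the reported 20.39.

# 5220.73 IOPS x 4096 B per I/O, converted to MiB/s:
awk 'BEGIN { printf "%.2f MiB/s\n", 5220.73 * 4096 / (1024 * 1024) }'
# -> 20.39 MiB/s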
00:17:40.244 02:14:54 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:40.244 02:14:54 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:40.244 nvmf_trace.0 00:17:40.244 02:14:54 -- common/autotest_common.sh@811 -- # return 0 00:17:40.244 02:14:54 -- target/tls.sh@16 -- # killprocess 77089 00:17:40.244 02:14:54 -- common/autotest_common.sh@926 -- # '[' -z 77089 ']' 00:17:40.244 02:14:54 -- common/autotest_common.sh@930 -- # kill -0 77089 00:17:40.244 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77089) - No such process 00:17:40.244 Process with pid 77089 is not found 00:17:40.244 02:14:54 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77089 is not found' 00:17:40.244 02:14:54 -- target/tls.sh@17 -- # nvmftestfini 00:17:40.244 02:14:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:40.244 02:14:54 -- nvmf/common.sh@116 -- # sync 00:17:40.244 02:14:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:40.244 02:14:54 -- nvmf/common.sh@119 -- # set +e 00:17:40.244 02:14:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:40.244 02:14:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:40.244 rmmod nvme_tcp 00:17:40.244 rmmod nvme_fabrics 00:17:40.244 rmmod nvme_keyring 00:17:40.244 02:14:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:40.244 02:14:54 -- nvmf/common.sh@123 -- # set -e 00:17:40.244 02:14:54 -- nvmf/common.sh@124 -- # return 0 00:17:40.244 02:14:54 -- nvmf/common.sh@477 -- # '[' -n 77045 ']' 00:17:40.244 02:14:54 -- nvmf/common.sh@478 -- # killprocess 77045 00:17:40.244 02:14:54 -- common/autotest_common.sh@926 -- # '[' -z 77045 ']' 00:17:40.244 02:14:54 -- common/autotest_common.sh@930 -- # kill -0 77045 00:17:40.244 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77045) - No such process 00:17:40.244 Process with pid 77045 is not found 00:17:40.244 02:14:54 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77045 is not found' 00:17:40.244 02:14:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:40.244 02:14:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:40.244 02:14:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:40.244 02:14:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.244 02:14:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:40.244 02:14:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.244 02:14:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.244 02:14:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.244 02:14:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:40.244 02:14:54 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:40.244 00:17:40.244 real 1m11.217s 00:17:40.244 user 1m51.720s 00:17:40.244 sys 0m23.911s 00:17:40.244 02:14:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.244 02:14:54 -- common/autotest_common.sh@10 -- # set +x 00:17:40.244 ************************************ 00:17:40.244 END TEST nvmf_tls 00:17:40.244 ************************************ 00:17:40.244 02:14:54 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:40.244 02:14:54 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:40.244 02:14:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.244 02:14:54 -- common/autotest_common.sh@10 -- # set +x 00:17:40.503 ************************************ 00:17:40.503 START TEST nvmf_fips 00:17:40.503 ************************************ 00:17:40.503 02:14:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:40.503 * Looking for test storage... 00:17:40.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:40.503 02:14:54 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.503 02:14:54 -- nvmf/common.sh@7 -- # uname -s 00:17:40.503 02:14:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.503 02:14:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.503 02:14:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.503 02:14:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.503 02:14:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.503 02:14:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.503 02:14:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.503 02:14:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.503 02:14:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.503 02:14:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.503 02:14:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:17:40.503 02:14:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:17:40.503 02:14:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.503 02:14:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.503 02:14:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.503 02:14:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.503 02:14:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.503 02:14:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.503 02:14:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.504 02:14:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.504 02:14:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.504 02:14:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.504 02:14:54 -- paths/export.sh@5 -- # export PATH 00:17:40.504 02:14:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.504 02:14:54 -- nvmf/common.sh@46 -- # : 0 00:17:40.504 02:14:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:40.504 02:14:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:40.504 02:14:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:40.504 02:14:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.504 02:14:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.504 02:14:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:40.504 02:14:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:40.504 02:14:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:40.504 02:14:54 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.504 02:14:54 -- fips/fips.sh@89 -- # check_openssl_version 00:17:40.504 02:14:54 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:40.504 02:14:54 -- fips/fips.sh@85 -- # openssl version 00:17:40.504 02:14:54 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:40.504 02:14:54 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:40.504 02:14:54 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:40.504 02:14:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:40.504 02:14:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:40.504 02:14:54 -- scripts/common.sh@335 -- # IFS=.-: 00:17:40.504 02:14:54 -- scripts/common.sh@335 -- # read -ra ver1 00:17:40.504 02:14:54 -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.504 02:14:54 -- scripts/common.sh@336 -- # read -ra ver2 00:17:40.504 02:14:54 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:40.504 02:14:54 -- scripts/common.sh@339 -- # ver1_l=3 00:17:40.504 02:14:54 -- scripts/common.sh@340 -- # ver2_l=3 00:17:40.504 02:14:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:40.504 02:14:54 -- scripts/common.sh@343 -- # case "$op" in 00:17:40.504 02:14:54 -- scripts/common.sh@347 -- # : 1 00:17:40.504 02:14:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:40.504 02:14:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.504 02:14:54 -- scripts/common.sh@364 -- # decimal 3 00:17:40.504 02:14:54 -- scripts/common.sh@352 -- # local d=3 00:17:40.504 02:14:54 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:40.504 02:14:54 -- scripts/common.sh@354 -- # echo 3 00:17:40.504 02:14:54 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:40.504 02:14:54 -- scripts/common.sh@365 -- # decimal 3 00:17:40.504 02:14:54 -- scripts/common.sh@352 -- # local d=3 00:17:40.504 02:14:54 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:40.504 02:14:54 -- scripts/common.sh@354 -- # echo 3 00:17:40.504 02:14:54 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:40.504 02:14:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:40.504 02:14:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:40.504 02:14:54 -- scripts/common.sh@363 -- # (( v++ )) 00:17:40.504 02:14:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:40.504 02:14:54 -- scripts/common.sh@364 -- # decimal 0 00:17:40.504 02:14:54 -- scripts/common.sh@352 -- # local d=0 00:17:40.504 02:14:54 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:40.504 02:14:54 -- scripts/common.sh@354 -- # echo 0 00:17:40.504 02:14:54 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:40.504 02:14:54 -- scripts/common.sh@365 -- # decimal 0 00:17:40.504 02:14:54 -- scripts/common.sh@352 -- # local d=0 00:17:40.504 02:14:54 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:40.504 02:14:54 -- scripts/common.sh@354 -- # echo 0 00:17:40.504 02:14:54 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:40.504 02:14:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:40.504 02:14:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:40.504 02:14:54 -- scripts/common.sh@363 -- # (( v++ )) 00:17:40.504 02:14:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:40.504 02:14:54 -- scripts/common.sh@364 -- # decimal 9 00:17:40.504 02:14:54 -- scripts/common.sh@352 -- # local d=9 00:17:40.504 02:14:54 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:40.504 02:14:54 -- scripts/common.sh@354 -- # echo 9 00:17:40.504 02:14:54 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:40.504 02:14:54 -- scripts/common.sh@365 -- # decimal 0 00:17:40.504 02:14:54 -- scripts/common.sh@352 -- # local d=0 00:17:40.504 02:14:54 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:40.504 02:14:54 -- scripts/common.sh@354 -- # echo 0 00:17:40.504 02:14:54 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:40.504 02:14:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:40.504 02:14:54 -- scripts/common.sh@366 -- # return 0 00:17:40.504 02:14:54 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:40.504 02:14:54 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:40.504 02:14:54 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:40.504 02:14:55 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:40.504 02:14:55 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:40.504 02:14:55 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:40.504 02:14:55 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:40.504 02:14:55 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:40.504 02:14:55 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:40.504 02:14:55 -- fips/fips.sh@114 -- # build_openssl_config 00:17:40.504 02:14:55 -- fips/fips.sh@37 -- # cat 00:17:40.504 02:14:55 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:40.504 02:14:55 -- fips/fips.sh@58 -- # cat - 00:17:40.504 02:14:55 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:40.504 02:14:55 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:40.504 02:14:55 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:40.504 02:14:55 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:40.504 02:14:55 -- fips/fips.sh@117 -- # openssl list -providers 00:17:40.504 02:14:55 -- fips/fips.sh@117 -- # grep name 00:17:40.505 02:14:55 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:40.505 02:14:55 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:40.505 02:14:55 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:40.505 02:14:55 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:40.505 02:14:55 -- fips/fips.sh@128 -- # : 00:17:40.505 02:14:55 -- common/autotest_common.sh@640 -- # local es=0 00:17:40.505 02:14:55 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:40.505 02:14:55 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:40.505 02:14:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:40.505 02:14:55 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:40.505 02:14:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:40.505 02:14:55 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:40.505 02:14:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:40.505 02:14:55 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:40.505 02:14:55 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:40.505 02:14:55 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:40.762 Error setting digest 00:17:40.762 002238367A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:40.762 002238367A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:40.762 02:14:55 -- common/autotest_common.sh@643 -- # es=1 00:17:40.762 02:14:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:40.762 02:14:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:40.762 02:14:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
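Editor's note: the passage above is the FIPS gate. The script requires both a base and a fips provider to be listed, then treats a successful MD5 as a failure, since an approved-only provider must refuse it; the "Error setting digest" output and es=1 are the expected outcome. A condensed, hedged sketch of the same check, assuming spdk_fips.conf has already been generated as above:

# Both providers present, MD5 refused.
export OPENSSL_CONF=spdk_fips.conf
openssl list -providers | grep name        # expect a base provider and a fips provider
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 unexpectedly succeeded - FIPS mode is not enforced" >&2
    exit 1
fi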
00:17:40.762 02:14:55 -- fips/fips.sh@131 -- # nvmftestinit 00:17:40.762 02:14:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:40.762 02:14:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.762 02:14:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:40.762 02:14:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:40.762 02:14:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:40.762 02:14:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.762 02:14:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.762 02:14:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.762 02:14:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:40.762 02:14:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:40.762 02:14:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:40.762 02:14:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:40.762 02:14:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:40.762 02:14:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:40.762 02:14:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.762 02:14:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.762 02:14:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:40.762 02:14:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:40.762 02:14:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.762 02:14:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.762 02:14:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.762 02:14:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.762 02:14:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.762 02:14:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.762 02:14:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.762 02:14:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.762 02:14:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:40.762 02:14:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:40.762 Cannot find device "nvmf_tgt_br" 00:17:40.762 02:14:55 -- nvmf/common.sh@154 -- # true 00:17:40.762 02:14:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.762 Cannot find device "nvmf_tgt_br2" 00:17:40.762 02:14:55 -- nvmf/common.sh@155 -- # true 00:17:40.762 02:14:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:40.762 02:14:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:40.762 Cannot find device "nvmf_tgt_br" 00:17:40.762 02:14:55 -- nvmf/common.sh@157 -- # true 00:17:40.762 02:14:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:40.762 Cannot find device "nvmf_tgt_br2" 00:17:40.762 02:14:55 -- nvmf/common.sh@158 -- # true 00:17:40.762 02:14:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:40.762 02:14:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:40.762 02:14:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.762 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.762 02:14:55 -- nvmf/common.sh@161 -- # true 00:17:40.762 02:14:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.762 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:40.762 02:14:55 -- nvmf/common.sh@162 -- # true 00:17:40.762 02:14:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:40.762 02:14:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:40.762 02:14:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:40.762 02:14:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:40.762 02:14:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.020 02:14:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.020 02:14:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:41.020 02:14:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:41.020 02:14:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:41.020 02:14:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:41.020 02:14:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:41.020 02:14:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:41.020 02:14:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:41.020 02:14:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.020 02:14:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.020 02:14:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.020 02:14:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:41.020 02:14:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:41.020 02:14:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.020 02:14:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:41.020 02:14:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:41.020 02:14:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:41.020 02:14:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:41.020 02:14:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:41.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:17:41.020 00:17:41.020 --- 10.0.0.2 ping statistics --- 00:17:41.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.020 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:17:41.020 02:14:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:41.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:41.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:17:41.020 00:17:41.020 --- 10.0.0.3 ping statistics --- 00:17:41.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.020 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:41.020 02:14:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:41.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:17:41.020 00:17:41.020 --- 10.0.0.1 ping statistics --- 00:17:41.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.020 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:41.020 02:14:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.020 02:14:55 -- nvmf/common.sh@421 -- # return 0 00:17:41.020 02:14:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:41.020 02:14:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.020 02:14:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:41.020 02:14:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:41.020 02:14:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.020 02:14:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:41.020 02:14:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:41.020 02:14:55 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:41.020 02:14:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:41.020 02:14:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:41.020 02:14:55 -- common/autotest_common.sh@10 -- # set +x 00:17:41.020 02:14:55 -- nvmf/common.sh@469 -- # nvmfpid=77458 00:17:41.020 02:14:55 -- nvmf/common.sh@470 -- # waitforlisten 77458 00:17:41.020 02:14:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.020 02:14:55 -- common/autotest_common.sh@819 -- # '[' -z 77458 ']' 00:17:41.020 02:14:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.020 02:14:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:41.020 02:14:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.020 02:14:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:41.020 02:14:55 -- common/autotest_common.sh@10 -- # set +x 00:17:41.279 [2024-05-14 02:14:55.609584] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:41.279 [2024-05-14 02:14:55.609684] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.279 [2024-05-14 02:14:55.748821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.279 [2024-05-14 02:14:55.830381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:41.279 [2024-05-14 02:14:55.830571] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.279 [2024-05-14 02:14:55.830592] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.279 [2024-05-14 02:14:55.830605] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
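Editor's note: the veth plumbing set up above is what lets the target inside the nvmf_tgt_ns_spdk namespace (10.0.0.2, 10.0.0.3) and the host-side initiator (10.0.0.1) reach each other through a bridge. Condensed to its essentials, and omitting the second target interface and the iptables ACCEPT rules, the topology is built roughly as below; every command appears above, only the trimming is editorial.

# Condensed sketch of the namespace/bridge topology built above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ping -c 1 10.0.0.2    # host -> target namespace, as verified above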
00:17:41.279 [2024-05-14 02:14:55.830655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.213 02:14:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:42.213 02:14:56 -- common/autotest_common.sh@852 -- # return 0 00:17:42.213 02:14:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:42.213 02:14:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:42.213 02:14:56 -- common/autotest_common.sh@10 -- # set +x 00:17:42.213 02:14:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.213 02:14:56 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:42.213 02:14:56 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:42.213 02:14:56 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:42.213 02:14:56 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:42.213 02:14:56 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:42.213 02:14:56 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:42.213 02:14:56 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:42.213 02:14:56 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.471 [2024-05-14 02:14:56.857495] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.471 [2024-05-14 02:14:56.873451] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:42.471 [2024-05-14 02:14:56.873633] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.471 malloc0 00:17:42.471 02:14:56 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.471 02:14:56 -- fips/fips.sh@148 -- # bdevperf_pid=77509 00:17:42.471 02:14:56 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.471 02:14:56 -- fips/fips.sh@149 -- # waitforlisten 77509 /var/tmp/bdevperf.sock 00:17:42.471 02:14:56 -- common/autotest_common.sh@819 -- # '[' -z 77509 ']' 00:17:42.471 02:14:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.471 02:14:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:42.471 02:14:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.471 02:14:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:42.471 02:14:56 -- common/autotest_common.sh@10 -- # set +x 00:17:42.471 [2024-05-14 02:14:57.011370] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:17:42.471 [2024-05-14 02:14:57.011469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77509 ] 00:17:42.730 [2024-05-14 02:14:57.153003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.730 [2024-05-14 02:14:57.224486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.666 02:14:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.666 02:14:57 -- common/autotest_common.sh@852 -- # return 0 00:17:43.666 02:14:57 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:43.666 [2024-05-14 02:14:58.202051] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.924 TLSTESTn1 00:17:43.924 02:14:58 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:43.924 Running I/O for 10 seconds... 00:17:53.900 00:17:53.900 Latency(us) 00:17:53.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.900 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:53.900 Verification LBA range: start 0x0 length 0x2000 00:17:53.900 TLSTESTn1 : 10.02 5242.68 20.48 0.00 0.00 24373.12 5570.56 27405.96 00:17:53.900 =================================================================================================================== 00:17:53.900 Total : 5242.68 20.48 0.00 0.00 24373.12 5570.56 27405.96 00:17:53.900 0 00:17:53.900 02:15:08 -- fips/fips.sh@1 -- # cleanup 00:17:53.900 02:15:08 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:53.900 02:15:08 -- common/autotest_common.sh@796 -- # type=--id 00:17:53.900 02:15:08 -- common/autotest_common.sh@797 -- # id=0 00:17:53.900 02:15:08 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:53.900 02:15:08 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:53.900 02:15:08 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:53.900 02:15:08 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:53.900 02:15:08 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:53.900 02:15:08 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:53.900 nvmf_trace.0 00:17:54.159 02:15:08 -- common/autotest_common.sh@811 -- # return 0 00:17:54.159 02:15:08 -- fips/fips.sh@16 -- # killprocess 77509 00:17:54.159 02:15:08 -- common/autotest_common.sh@926 -- # '[' -z 77509 ']' 00:17:54.159 02:15:08 -- common/autotest_common.sh@930 -- # kill -0 77509 00:17:54.159 02:15:08 -- common/autotest_common.sh@931 -- # uname 00:17:54.159 02:15:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:54.159 02:15:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77509 00:17:54.159 killing process with pid 77509 00:17:54.159 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.159 00:17:54.159 Latency(us) 00:17:54.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.159 
=================================================================================================================== 00:17:54.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.159 02:15:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:54.159 02:15:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:54.159 02:15:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77509' 00:17:54.159 02:15:08 -- common/autotest_common.sh@945 -- # kill 77509 00:17:54.159 02:15:08 -- common/autotest_common.sh@950 -- # wait 77509 00:17:54.159 02:15:08 -- fips/fips.sh@17 -- # nvmftestfini 00:17:54.159 02:15:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:54.159 02:15:08 -- nvmf/common.sh@116 -- # sync 00:17:54.417 02:15:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:54.417 02:15:08 -- nvmf/common.sh@119 -- # set +e 00:17:54.417 02:15:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:54.417 02:15:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:54.417 rmmod nvme_tcp 00:17:54.417 rmmod nvme_fabrics 00:17:54.417 rmmod nvme_keyring 00:17:54.417 02:15:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:54.418 02:15:08 -- nvmf/common.sh@123 -- # set -e 00:17:54.418 02:15:08 -- nvmf/common.sh@124 -- # return 0 00:17:54.418 02:15:08 -- nvmf/common.sh@477 -- # '[' -n 77458 ']' 00:17:54.418 02:15:08 -- nvmf/common.sh@478 -- # killprocess 77458 00:17:54.418 02:15:08 -- common/autotest_common.sh@926 -- # '[' -z 77458 ']' 00:17:54.418 02:15:08 -- common/autotest_common.sh@930 -- # kill -0 77458 00:17:54.418 02:15:08 -- common/autotest_common.sh@931 -- # uname 00:17:54.418 02:15:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:54.418 02:15:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77458 00:17:54.418 killing process with pid 77458 00:17:54.418 02:15:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:54.418 02:15:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:54.418 02:15:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77458' 00:17:54.418 02:15:08 -- common/autotest_common.sh@945 -- # kill 77458 00:17:54.418 02:15:08 -- common/autotest_common.sh@950 -- # wait 77458 00:17:54.677 02:15:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:54.677 02:15:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:54.677 02:15:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:54.677 02:15:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.677 02:15:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:54.677 02:15:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.677 02:15:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.677 02:15:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.677 02:15:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:54.677 02:15:09 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:54.677 ************************************ 00:17:54.677 END TEST nvmf_fips 00:17:54.677 ************************************ 00:17:54.677 00:17:54.677 real 0m14.246s 00:17:54.677 user 0m18.850s 00:17:54.677 sys 0m6.049s 00:17:54.677 02:15:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.677 02:15:09 -- common/autotest_common.sh@10 -- # set +x 00:17:54.677 02:15:09 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:54.677 02:15:09 -- nvmf/nvmf.sh@64 -- # 
run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:54.677 02:15:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:54.677 02:15:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:54.677 02:15:09 -- common/autotest_common.sh@10 -- # set +x 00:17:54.677 ************************************ 00:17:54.677 START TEST nvmf_fuzz 00:17:54.677 ************************************ 00:17:54.677 02:15:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:54.677 * Looking for test storage... 00:17:54.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:54.677 02:15:09 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.677 02:15:09 -- nvmf/common.sh@7 -- # uname -s 00:17:54.677 02:15:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.677 02:15:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.677 02:15:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.677 02:15:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.677 02:15:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.677 02:15:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.677 02:15:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.677 02:15:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.677 02:15:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.677 02:15:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.677 02:15:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:17:54.677 02:15:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:17:54.677 02:15:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.677 02:15:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.677 02:15:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:54.677 02:15:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.677 02:15:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.677 02:15:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.677 02:15:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.677 02:15:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.677 02:15:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.677 02:15:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.677 02:15:09 -- paths/export.sh@5 -- # export PATH 00:17:54.677 02:15:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.678 02:15:09 -- nvmf/common.sh@46 -- # : 0 00:17:54.678 02:15:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:54.678 02:15:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:54.678 02:15:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:54.678 02:15:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.678 02:15:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.678 02:15:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:54.678 02:15:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:54.678 02:15:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:54.678 02:15:09 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:54.678 02:15:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:54.678 02:15:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.678 02:15:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:54.678 02:15:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:54.678 02:15:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:54.678 02:15:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.678 02:15:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.678 02:15:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.678 02:15:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:54.678 02:15:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:54.678 02:15:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:54.678 02:15:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:54.678 02:15:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:54.678 02:15:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:54.678 02:15:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.678 02:15:09 
-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.678 02:15:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:54.678 02:15:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:54.678 02:15:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:54.678 02:15:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:54.678 02:15:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:54.678 02:15:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.678 02:15:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:54.678 02:15:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:54.678 02:15:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:54.678 02:15:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:54.678 02:15:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:54.935 02:15:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:54.935 Cannot find device "nvmf_tgt_br" 00:17:54.935 02:15:09 -- nvmf/common.sh@154 -- # true 00:17:54.935 02:15:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.935 Cannot find device "nvmf_tgt_br2" 00:17:54.935 02:15:09 -- nvmf/common.sh@155 -- # true 00:17:54.935 02:15:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:54.935 02:15:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:54.935 Cannot find device "nvmf_tgt_br" 00:17:54.935 02:15:09 -- nvmf/common.sh@157 -- # true 00:17:54.935 02:15:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:54.935 Cannot find device "nvmf_tgt_br2" 00:17:54.935 02:15:09 -- nvmf/common.sh@158 -- # true 00:17:54.935 02:15:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:54.935 02:15:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:54.935 02:15:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.935 02:15:09 -- nvmf/common.sh@161 -- # true 00:17:54.935 02:15:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.935 02:15:09 -- nvmf/common.sh@162 -- # true 00:17:54.935 02:15:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:54.935 02:15:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:54.935 02:15:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:54.935 02:15:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:54.935 02:15:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:54.935 02:15:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:54.935 02:15:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:54.935 02:15:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:54.935 02:15:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:54.935 02:15:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:54.935 02:15:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:54.935 02:15:09 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:54.935 02:15:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:54.935 02:15:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:54.935 02:15:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:54.935 02:15:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:54.935 02:15:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:54.935 02:15:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:55.194 02:15:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:55.194 02:15:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:55.194 02:15:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:55.194 02:15:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:55.194 02:15:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:55.194 02:15:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:55.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:17:55.194 00:17:55.194 --- 10.0.0.2 ping statistics --- 00:17:55.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.194 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:17:55.194 02:15:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:55.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:55.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:17:55.194 00:17:55.194 --- 10.0.0.3 ping statistics --- 00:17:55.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.194 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:55.194 02:15:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:55.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:55.194 00:17:55.194 --- 10.0.0.1 ping statistics --- 00:17:55.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.194 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:55.194 02:15:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.194 02:15:09 -- nvmf/common.sh@421 -- # return 0 00:17:55.194 02:15:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:55.194 02:15:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.194 02:15:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:55.194 02:15:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:55.195 02:15:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.195 02:15:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:55.195 02:15:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:55.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
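The sequence above is nvmf_veth_init from nvmf/common.sh building the isolated NVMe/TCP test topology: the target gets its own nvmf_tgt_ns_spdk network namespace, each side gets a veth pair whose peer joins the nvmf_br bridge, an iptables rule admits the NVMe/TCP port, and the pings confirm reachability. A condensed shell sketch of that setup, using only the interface names and addresses shown in the log (link-up steps, the second target interface and the teardown are omitted):

    ip netns add nvmf_tgt_ns_spdk
    # initiator side stays in the root namespace; the target end moves into the namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # NVMF_FIRST_TARGET_IP
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                       # initiator -> target check

Running the target under "ip netns exec nvmf_tgt_ns_spdk" (the NVMF_TARGET_NS_CMD prefix) is what lets the host act as the initiator at 10.0.0.1 while the target listens on 10.0.0.2 behind the bridge.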
00:17:55.195 02:15:09 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77856 00:17:55.195 02:15:09 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:55.195 02:15:09 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:55.195 02:15:09 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77856 00:17:55.195 02:15:09 -- common/autotest_common.sh@819 -- # '[' -z 77856 ']' 00:17:55.195 02:15:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.195 02:15:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:55.195 02:15:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.195 02:15:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:55.195 02:15:09 -- common/autotest_common.sh@10 -- # set +x 00:17:56.138 02:15:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:56.138 02:15:10 -- common/autotest_common.sh@852 -- # return 0 00:17:56.138 02:15:10 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.138 02:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.138 02:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:56.138 02:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.138 02:15:10 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:56.138 02:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.138 02:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:56.138 Malloc0 00:17:56.138 02:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.138 02:15:10 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:56.138 02:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.138 02:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:56.138 02:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.138 02:15:10 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.138 02:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.138 02:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:56.138 02:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.138 02:15:10 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.138 02:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.138 02:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:56.138 02:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.138 02:15:10 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:56.138 02:15:10 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:56.704 Shutting down the fuzz application 00:17:56.704 02:15:11 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 
-j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:56.962 Shutting down the fuzz application 00:17:56.962 02:15:11 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.962 02:15:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.962 02:15:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.962 02:15:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.962 02:15:11 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:56.962 02:15:11 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:56.962 02:15:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:56.962 02:15:11 -- nvmf/common.sh@116 -- # sync 00:17:56.962 02:15:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:56.962 02:15:11 -- nvmf/common.sh@119 -- # set +e 00:17:56.962 02:15:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:56.962 02:15:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:56.962 rmmod nvme_tcp 00:17:56.962 rmmod nvme_fabrics 00:17:56.963 rmmod nvme_keyring 00:17:56.963 02:15:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:57.222 02:15:11 -- nvmf/common.sh@123 -- # set -e 00:17:57.222 02:15:11 -- nvmf/common.sh@124 -- # return 0 00:17:57.222 02:15:11 -- nvmf/common.sh@477 -- # '[' -n 77856 ']' 00:17:57.222 02:15:11 -- nvmf/common.sh@478 -- # killprocess 77856 00:17:57.222 02:15:11 -- common/autotest_common.sh@926 -- # '[' -z 77856 ']' 00:17:57.222 02:15:11 -- common/autotest_common.sh@930 -- # kill -0 77856 00:17:57.222 02:15:11 -- common/autotest_common.sh@931 -- # uname 00:17:57.222 02:15:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:57.222 02:15:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77856 00:17:57.222 killing process with pid 77856 00:17:57.222 02:15:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:57.222 02:15:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:57.222 02:15:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77856' 00:17:57.222 02:15:11 -- common/autotest_common.sh@945 -- # kill 77856 00:17:57.222 02:15:11 -- common/autotest_common.sh@950 -- # wait 77856 00:17:57.222 02:15:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:57.222 02:15:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:57.222 02:15:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:57.222 02:15:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:57.222 02:15:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:57.222 02:15:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.222 02:15:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.222 02:15:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.481 02:15:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:57.481 02:15:11 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:57.481 ************************************ 00:17:57.481 END TEST nvmf_fuzz 00:17:57.481 ************************************ 00:17:57.481 00:17:57.481 real 0m2.693s 00:17:57.481 user 0m2.867s 00:17:57.481 sys 0m0.573s 00:17:57.481 02:15:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.481 02:15:11 -- common/autotest_common.sh@10 -- # set +x 00:17:57.481 02:15:11 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:57.481 02:15:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:57.481 02:15:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:57.481 02:15:11 -- common/autotest_common.sh@10 -- # set +x 00:17:57.481 ************************************ 00:17:57.481 START TEST nvmf_multiconnection 00:17:57.481 ************************************ 00:17:57.481 02:15:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:57.481 * Looking for test storage... 00:17:57.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:57.481 02:15:11 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.481 02:15:11 -- nvmf/common.sh@7 -- # uname -s 00:17:57.481 02:15:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.481 02:15:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.481 02:15:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.481 02:15:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.481 02:15:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.481 02:15:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.481 02:15:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.481 02:15:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.481 02:15:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.481 02:15:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.481 02:15:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:17:57.481 02:15:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:17:57.481 02:15:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.481 02:15:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.481 02:15:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.481 02:15:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.481 02:15:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.481 02:15:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.481 02:15:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.481 02:15:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.481 02:15:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.481 02:15:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.481 02:15:11 -- paths/export.sh@5 -- # export PATH 00:17:57.481 02:15:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.481 02:15:11 -- nvmf/common.sh@46 -- # : 0 00:17:57.481 02:15:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:57.481 02:15:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:57.481 02:15:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:57.481 02:15:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.481 02:15:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.481 02:15:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:57.481 02:15:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:57.481 02:15:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:57.481 02:15:11 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.481 02:15:11 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.481 02:15:11 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:57.481 02:15:11 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:57.481 02:15:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:57.481 02:15:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.481 02:15:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:57.481 02:15:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:57.481 02:15:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:57.481 02:15:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.481 02:15:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.481 02:15:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.481 02:15:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:57.481 02:15:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:57.481 02:15:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:57.481 02:15:11 -- nvmf/common.sh@414 -- # [[ virt == 
phy-fallback ]] 00:17:57.481 02:15:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:57.481 02:15:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:57.481 02:15:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.481 02:15:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.481 02:15:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:57.481 02:15:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:57.481 02:15:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.481 02:15:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.481 02:15:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.481 02:15:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.481 02:15:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.481 02:15:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.481 02:15:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.481 02:15:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.481 02:15:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:57.481 02:15:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:57.481 Cannot find device "nvmf_tgt_br" 00:17:57.481 02:15:12 -- nvmf/common.sh@154 -- # true 00:17:57.481 02:15:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.481 Cannot find device "nvmf_tgt_br2" 00:17:57.481 02:15:12 -- nvmf/common.sh@155 -- # true 00:17:57.481 02:15:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:57.481 02:15:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:57.481 Cannot find device "nvmf_tgt_br" 00:17:57.482 02:15:12 -- nvmf/common.sh@157 -- # true 00:17:57.482 02:15:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:57.482 Cannot find device "nvmf_tgt_br2" 00:17:57.482 02:15:12 -- nvmf/common.sh@158 -- # true 00:17:57.482 02:15:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:57.740 02:15:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:57.740 02:15:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.740 02:15:12 -- nvmf/common.sh@161 -- # true 00:17:57.740 02:15:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.740 02:15:12 -- nvmf/common.sh@162 -- # true 00:17:57.740 02:15:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.740 02:15:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.740 02:15:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.740 02:15:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.740 02:15:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.740 02:15:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.740 02:15:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.740 02:15:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:57.740 02:15:12 -- nvmf/common.sh@179 -- # ip netns 
exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:57.740 02:15:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:57.740 02:15:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:57.740 02:15:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:57.740 02:15:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:57.740 02:15:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.740 02:15:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.740 02:15:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.740 02:15:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:57.740 02:15:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:57.740 02:15:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.740 02:15:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.740 02:15:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.740 02:15:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.740 02:15:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.740 02:15:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:57.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:57.740 00:17:57.740 --- 10.0.0.2 ping statistics --- 00:17:57.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.740 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:57.740 02:15:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:57.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:17:57.740 00:17:57.740 --- 10.0.0.3 ping statistics --- 00:17:57.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.740 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:57.740 02:15:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:57.740 00:17:57.740 --- 10.0.0.1 ping statistics --- 00:17:57.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.740 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:57.740 02:15:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.740 02:15:12 -- nvmf/common.sh@421 -- # return 0 00:17:57.740 02:15:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:57.740 02:15:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.740 02:15:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:57.740 02:15:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:57.740 02:15:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.740 02:15:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:57.740 02:15:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:57.740 02:15:12 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:57.740 02:15:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:57.740 02:15:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:57.740 02:15:12 -- common/autotest_common.sh@10 -- # set +x 00:17:57.740 02:15:12 -- nvmf/common.sh@469 -- # nvmfpid=78060 00:17:57.740 02:15:12 -- nvmf/common.sh@470 -- # waitforlisten 78060 00:17:57.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.740 02:15:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.740 02:15:12 -- common/autotest_common.sh@819 -- # '[' -z 78060 ']' 00:17:57.740 02:15:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.740 02:15:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:57.740 02:15:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.740 02:15:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:57.740 02:15:12 -- common/autotest_common.sh@10 -- # set +x 00:17:57.999 [2024-05-14 02:15:12.373085] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:57.999 [2024-05-14 02:15:12.373176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.999 [2024-05-14 02:15:12.512386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.999 [2024-05-14 02:15:12.572142] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:57.999 [2024-05-14 02:15:12.572468] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.999 [2024-05-14 02:15:12.572593] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.999 [2024-05-14 02:15:12.572849] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
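The nvmf_tgt for the multiconnection test has just been started inside the namespace (nvmfpid=78060, core mask 0xF, hence one reactor per core in the notices below). The RPCs that follow create the TCP transport once and then build eleven identical subsystems, cnode1 through cnode11, each backed by a 64 MB malloc bdev of 512-byte blocks and listening on 10.0.0.2:4420. A condensed sketch of that sequence as it would look issued directly with SPDK's scripts/rpc.py (in the log it goes through the rpc_cmd helper against /var/tmp/spdk.sock), with the values exactly as they appear below:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

Each "nvme connect ... -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420" further down then exposes one namespace per subsystem on the initiator, and waitforserial polls lsblk -l -o NAME,SERIAL until the matching SPDK$i serial appears before the script moves on to the next connect.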
00:17:57.999 [2024-05-14 02:15:12.573101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.999 [2024-05-14 02:15:12.573219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.999 [2024-05-14 02:15:12.573258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.999 [2024-05-14 02:15:12.573259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.935 02:15:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:58.935 02:15:13 -- common/autotest_common.sh@852 -- # return 0 00:17:58.935 02:15:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:58.935 02:15:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:58.935 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:58.935 02:15:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.935 02:15:13 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.935 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.935 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:58.935 [2024-05-14 02:15:13.473131] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.935 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.935 02:15:13 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:58.935 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.935 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:58.935 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.935 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:58.935 Malloc1 00:17:58.935 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.935 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:58.935 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.935 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 [2024-05-14 02:15:13.542308] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.192 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 Malloc2 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.192 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 Malloc3 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.192 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 Malloc4 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 
02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.192 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 Malloc5 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.192 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 Malloc6 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.192 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 Malloc7 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.192 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:59.192 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.192 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.193 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.193 02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:59.193 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.193 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.450 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 Malloc8 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.450 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 Malloc9 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.450 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 Malloc10 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.450 02:15:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 Malloc11 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:59.450 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.450 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.450 02:15:13 -- common/autotest_common.sh@579 -- 
# [[ 0 == 0 ]] 00:17:59.450 02:15:13 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:59.450 02:15:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.450 02:15:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:59.708 02:15:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:59.708 02:15:14 -- common/autotest_common.sh@1177 -- # local i=0 00:17:59.708 02:15:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.708 02:15:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:59.708 02:15:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:01.638 02:15:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:01.638 02:15:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:01.638 02:15:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:18:01.638 02:15:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:01.638 02:15:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.638 02:15:16 -- common/autotest_common.sh@1187 -- # return 0 00:18:01.638 02:15:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.638 02:15:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:01.897 02:15:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:01.898 02:15:16 -- common/autotest_common.sh@1177 -- # local i=0 00:18:01.898 02:15:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.898 02:15:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:01.898 02:15:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:03.803 02:15:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:03.803 02:15:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:03.803 02:15:18 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:18:03.803 02:15:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:03.803 02:15:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.803 02:15:18 -- common/autotest_common.sh@1187 -- # return 0 00:18:03.803 02:15:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:03.803 02:15:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:04.074 02:15:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:04.074 02:15:18 -- common/autotest_common.sh@1177 -- # local i=0 00:18:04.074 02:15:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.074 02:15:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:04.074 02:15:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:05.974 02:15:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:05.974 02:15:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:05.974 02:15:20 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:18:05.974 02:15:20 -- 
common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:05.974 02:15:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.974 02:15:20 -- common/autotest_common.sh@1187 -- # return 0 00:18:05.974 02:15:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.974 02:15:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:06.232 02:15:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:06.232 02:15:20 -- common/autotest_common.sh@1177 -- # local i=0 00:18:06.232 02:15:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.232 02:15:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:06.232 02:15:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:08.131 02:15:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:08.131 02:15:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:08.131 02:15:22 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:18:08.131 02:15:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:08.131 02:15:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.131 02:15:22 -- common/autotest_common.sh@1187 -- # return 0 00:18:08.131 02:15:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.131 02:15:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:08.390 02:15:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:08.390 02:15:22 -- common/autotest_common.sh@1177 -- # local i=0 00:18:08.390 02:15:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.390 02:15:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:08.390 02:15:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:10.921 02:15:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:10.921 02:15:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:10.921 02:15:24 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:18:10.921 02:15:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:10.921 02:15:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:10.921 02:15:24 -- common/autotest_common.sh@1187 -- # return 0 00:18:10.921 02:15:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.921 02:15:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:10.921 02:15:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:10.921 02:15:25 -- common/autotest_common.sh@1177 -- # local i=0 00:18:10.921 02:15:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.921 02:15:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:10.921 02:15:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:12.823 02:15:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:12.823 02:15:27 -- common/autotest_common.sh@1186 -- # 
lsblk -l -o NAME,SERIAL 00:18:12.823 02:15:27 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:18:12.823 02:15:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:12.823 02:15:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.823 02:15:27 -- common/autotest_common.sh@1187 -- # return 0 00:18:12.823 02:15:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.823 02:15:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:12.823 02:15:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:12.823 02:15:27 -- common/autotest_common.sh@1177 -- # local i=0 00:18:12.823 02:15:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.823 02:15:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:12.823 02:15:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:14.725 02:15:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:14.725 02:15:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:14.725 02:15:29 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:18:14.725 02:15:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:14.725 02:15:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.725 02:15:29 -- common/autotest_common.sh@1187 -- # return 0 00:18:14.725 02:15:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.725 02:15:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:14.983 02:15:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:14.983 02:15:29 -- common/autotest_common.sh@1177 -- # local i=0 00:18:14.983 02:15:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.983 02:15:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:14.983 02:15:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:17.514 02:15:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:17.514 02:15:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:17.514 02:15:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:18:17.514 02:15:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:17.514 02:15:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.514 02:15:31 -- common/autotest_common.sh@1187 -- # return 0 00:18:17.514 02:15:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.514 02:15:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:17.514 02:15:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:17.514 02:15:31 -- common/autotest_common.sh@1177 -- # local i=0 00:18:17.514 02:15:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.514 02:15:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:17.514 02:15:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:19.415 
02:15:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:19.415 02:15:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:19.415 02:15:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:18:19.415 02:15:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:19.415 02:15:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.415 02:15:33 -- common/autotest_common.sh@1187 -- # return 0 00:18:19.415 02:15:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:19.415 02:15:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:19.415 02:15:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:19.415 02:15:33 -- common/autotest_common.sh@1177 -- # local i=0 00:18:19.415 02:15:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.415 02:15:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:19.415 02:15:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:21.317 02:15:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:21.318 02:15:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:21.318 02:15:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:18:21.318 02:15:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:21.318 02:15:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.318 02:15:35 -- common/autotest_common.sh@1187 -- # return 0 00:18:21.318 02:15:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.318 02:15:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:21.576 02:15:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:21.576 02:15:36 -- common/autotest_common.sh@1177 -- # local i=0 00:18:21.576 02:15:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.576 02:15:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:21.576 02:15:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:23.496 02:15:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:23.496 02:15:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:18:23.496 02:15:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:23.755 02:15:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:23.755 02:15:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.755 02:15:38 -- common/autotest_common.sh@1187 -- # return 0 00:18:23.755 02:15:38 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:23.755 [global] 00:18:23.755 thread=1 00:18:23.755 invalidate=1 00:18:23.755 rw=read 00:18:23.755 time_based=1 00:18:23.755 runtime=10 00:18:23.755 ioengine=libaio 00:18:23.755 direct=1 00:18:23.755 bs=262144 00:18:23.755 iodepth=64 00:18:23.755 norandommap=1 00:18:23.755 numjobs=1 00:18:23.755 00:18:23.755 [job0] 00:18:23.755 filename=/dev/nvme0n1 00:18:23.755 [job1] 00:18:23.755 filename=/dev/nvme10n1 00:18:23.755 [job2] 00:18:23.755 filename=/dev/nvme1n1 
00:18:23.755 [job3] 00:18:23.755 filename=/dev/nvme2n1 00:18:23.755 [job4] 00:18:23.755 filename=/dev/nvme3n1 00:18:23.755 [job5] 00:18:23.755 filename=/dev/nvme4n1 00:18:23.755 [job6] 00:18:23.755 filename=/dev/nvme5n1 00:18:23.755 [job7] 00:18:23.755 filename=/dev/nvme6n1 00:18:23.755 [job8] 00:18:23.755 filename=/dev/nvme7n1 00:18:23.755 [job9] 00:18:23.755 filename=/dev/nvme8n1 00:18:23.755 [job10] 00:18:23.755 filename=/dev/nvme9n1 00:18:23.755 Could not set queue depth (nvme0n1) 00:18:23.755 Could not set queue depth (nvme10n1) 00:18:23.755 Could not set queue depth (nvme1n1) 00:18:23.755 Could not set queue depth (nvme2n1) 00:18:23.755 Could not set queue depth (nvme3n1) 00:18:23.755 Could not set queue depth (nvme4n1) 00:18:23.755 Could not set queue depth (nvme5n1) 00:18:23.755 Could not set queue depth (nvme6n1) 00:18:23.755 Could not set queue depth (nvme7n1) 00:18:23.755 Could not set queue depth (nvme8n1) 00:18:23.755 Could not set queue depth (nvme9n1) 00:18:24.014 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.014 fio-3.35 00:18:24.014 Starting 11 threads 00:18:36.216 00:18:36.216 job0: (groupid=0, jobs=1): err= 0: pid=78537: Tue May 14 02:15:48 2024 00:18:36.216 read: IOPS=726, BW=182MiB/s (190MB/s)(1829MiB/10069msec) 00:18:36.216 slat (usec): min=14, max=52540, avg=1353.52, stdev=4554.78 00:18:36.216 clat (msec): min=18, max=157, avg=86.60, stdev=10.71 00:18:36.216 lat (msec): min=20, max=157, avg=87.95, stdev=11.42 00:18:36.216 clat percentiles (msec): 00:18:36.216 | 1.00th=[ 54], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 81], 00:18:36.216 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 89], 00:18:36.216 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 102], 00:18:36.216 | 99.00th=[ 111], 99.50th=[ 123], 99.90th=[ 157], 99.95th=[ 159], 00:18:36.216 | 99.99th=[ 159] 00:18:36.216 bw ( KiB/s): min=171520, max=220160, per=8.53%, avg=185578.25, stdev=9569.63, samples=20 00:18:36.216 iops : min= 670, max= 860, avg=724.65, stdev=37.43, samples=20 00:18:36.216 lat (msec) : 20=0.01%, 50=0.75%, 100=93.07%, 250=6.17% 00:18:36.216 cpu : usr=0.26%, sys=2.33%, ctx=2106, majf=0, minf=4097 00:18:36.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:36.216 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.216 issued rwts: total=7315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.216 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.216 job1: (groupid=0, jobs=1): err= 0: pid=78538: Tue May 14 02:15:48 2024 00:18:36.216 read: IOPS=709, BW=177MiB/s (186MB/s)(1787MiB/10069msec) 00:18:36.216 slat (usec): min=14, max=44547, avg=1337.08, stdev=4435.59 00:18:36.216 clat (msec): min=14, max=159, avg=88.66, stdev=13.87 00:18:36.216 lat (msec): min=15, max=159, avg=89.99, stdev=14.43 00:18:36.216 clat percentiles (msec): 00:18:36.216 | 1.00th=[ 49], 5.00th=[ 62], 10.00th=[ 74], 20.00th=[ 82], 00:18:36.216 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 92], 00:18:36.216 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 102], 95.00th=[ 106], 00:18:36.216 | 99.00th=[ 134], 99.50th=[ 146], 99.90th=[ 150], 99.95th=[ 150], 00:18:36.216 | 99.99th=[ 161] 00:18:36.216 bw ( KiB/s): min=163328, max=244224, per=8.34%, avg=181279.05, stdev=15943.43, samples=20 00:18:36.216 iops : min= 638, max= 954, avg=707.85, stdev=62.32, samples=20 00:18:36.216 lat (msec) : 20=0.13%, 50=1.18%, 100=86.16%, 250=12.54% 00:18:36.216 cpu : usr=0.30%, sys=2.43%, ctx=1842, majf=0, minf=4097 00:18:36.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:36.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.216 issued rwts: total=7147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.216 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.216 job2: (groupid=0, jobs=1): err= 0: pid=78539: Tue May 14 02:15:48 2024 00:18:36.216 read: IOPS=542, BW=136MiB/s (142MB/s)(1368MiB/10083msec) 00:18:36.216 slat (usec): min=17, max=64843, avg=1809.74, stdev=6109.62 00:18:36.216 clat (msec): min=21, max=186, avg=116.01, stdev=12.54 00:18:36.216 lat (msec): min=21, max=186, avg=117.81, stdev=13.79 00:18:36.216 clat percentiles (msec): 00:18:36.216 | 1.00th=[ 80], 5.00th=[ 94], 10.00th=[ 104], 20.00th=[ 110], 00:18:36.216 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 118], 00:18:36.216 | 70.00th=[ 122], 80.00th=[ 125], 90.00th=[ 129], 95.00th=[ 134], 00:18:36.216 | 99.00th=[ 148], 99.50th=[ 167], 99.90th=[ 186], 99.95th=[ 188], 00:18:36.216 | 99.99th=[ 188] 00:18:36.216 bw ( KiB/s): min=119296, max=161090, per=6.37%, avg=138529.75, stdev=8514.93, samples=20 00:18:36.216 iops : min= 466, max= 629, avg=541.00, stdev=33.20, samples=20 00:18:36.216 lat (msec) : 50=0.11%, 100=7.37%, 250=92.52% 00:18:36.216 cpu : usr=0.24%, sys=1.78%, ctx=1443, majf=0, minf=4097 00:18:36.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:36.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.216 issued rwts: total=5471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.216 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.216 job3: (groupid=0, jobs=1): err= 0: pid=78540: Tue May 14 02:15:48 2024 00:18:36.216 read: IOPS=1057, BW=264MiB/s (277MB/s)(2659MiB/10053msec) 00:18:36.216 slat (usec): min=16, max=47136, avg=935.41, stdev=3284.70 00:18:36.216 clat (msec): min=15, max=129, avg=59.44, stdev= 9.50 00:18:36.216 lat (msec): min=15, max=145, avg=60.38, stdev= 9.86 
00:18:36.216 clat percentiles (msec): 00:18:36.216 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 53], 00:18:36.216 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 61], 00:18:36.216 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 70], 95.00th=[ 75], 00:18:36.216 | 99.00th=[ 97], 99.50th=[ 105], 99.90th=[ 112], 99.95th=[ 121], 00:18:36.216 | 99.99th=[ 130] 00:18:36.216 bw ( KiB/s): min=182784, max=291911, per=12.44%, avg=270560.40, stdev=23971.30, samples=20 00:18:36.216 iops : min= 714, max= 1140, avg=1056.55, stdev=93.59, samples=20 00:18:36.216 lat (msec) : 20=0.13%, 50=10.31%, 100=89.03%, 250=0.53% 00:18:36.216 cpu : usr=0.37%, sys=3.25%, ctx=2953, majf=0, minf=4097 00:18:36.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:36.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.216 issued rwts: total=10636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.217 job4: (groupid=0, jobs=1): err= 0: pid=78541: Tue May 14 02:15:48 2024 00:18:36.217 read: IOPS=1062, BW=266MiB/s (278MB/s)(2669MiB/10052msec) 00:18:36.217 slat (usec): min=17, max=49202, avg=926.72, stdev=3233.59 00:18:36.217 clat (msec): min=21, max=113, avg=59.20, stdev= 8.73 00:18:36.217 lat (msec): min=22, max=138, avg=60.12, stdev= 9.11 00:18:36.217 clat percentiles (msec): 00:18:36.217 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 53], 00:18:36.217 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 61], 00:18:36.217 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 69], 95.00th=[ 73], 00:18:36.217 | 99.00th=[ 94], 99.50th=[ 100], 99.90th=[ 108], 99.95th=[ 114], 00:18:36.217 | 99.99th=[ 114] 00:18:36.217 bw ( KiB/s): min=179712, max=290304, per=12.49%, avg=271579.95, stdev=23733.47, samples=20 00:18:36.217 iops : min= 702, max= 1134, avg=1060.55, stdev=92.64, samples=20 00:18:36.217 lat (msec) : 50=9.25%, 100=90.39%, 250=0.37% 00:18:36.217 cpu : usr=0.40%, sys=3.34%, ctx=2826, majf=0, minf=4097 00:18:36.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:36.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.217 issued rwts: total=10676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.217 job5: (groupid=0, jobs=1): err= 0: pid=78542: Tue May 14 02:15:48 2024 00:18:36.217 read: IOPS=1050, BW=263MiB/s (275MB/s)(2638MiB/10043msec) 00:18:36.217 slat (usec): min=14, max=37344, avg=937.13, stdev=3179.53 00:18:36.217 clat (msec): min=30, max=111, avg=59.89, stdev= 8.35 00:18:36.217 lat (msec): min=30, max=120, avg=60.83, stdev= 8.75 00:18:36.217 clat percentiles (msec): 00:18:36.217 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 54], 00:18:36.217 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 62], 00:18:36.217 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 69], 95.00th=[ 73], 00:18:36.217 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 105], 99.95th=[ 107], 00:18:36.217 | 99.99th=[ 112] 00:18:36.217 bw ( KiB/s): min=188928, max=289859, per=12.35%, avg=268630.50, stdev=19980.94, samples=20 00:18:36.217 iops : min= 738, max= 1132, avg=1049.15, stdev=77.99, samples=20 00:18:36.217 lat (msec) : 50=8.00%, 100=91.73%, 250=0.27% 00:18:36.217 cpu : usr=0.40%, sys=3.63%, ctx=2548, 
majf=0, minf=4097 00:18:36.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:36.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.217 issued rwts: total=10552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.217 job6: (groupid=0, jobs=1): err= 0: pid=78543: Tue May 14 02:15:48 2024 00:18:36.217 read: IOPS=536, BW=134MiB/s (141MB/s)(1354MiB/10095msec) 00:18:36.217 slat (usec): min=17, max=66440, avg=1842.76, stdev=5988.01 00:18:36.217 clat (msec): min=18, max=199, avg=117.25, stdev=14.66 00:18:36.217 lat (msec): min=18, max=199, avg=119.09, stdev=15.75 00:18:36.217 clat percentiles (msec): 00:18:36.217 | 1.00th=[ 61], 5.00th=[ 95], 10.00th=[ 105], 20.00th=[ 110], 00:18:36.217 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 121], 00:18:36.217 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 136], 00:18:36.217 | 99.00th=[ 157], 99.50th=[ 178], 99.90th=[ 199], 99.95th=[ 201], 00:18:36.217 | 99.99th=[ 201] 00:18:36.217 bw ( KiB/s): min=123904, max=167759, per=6.30%, avg=136949.25, stdev=9489.84, samples=20 00:18:36.217 iops : min= 484, max= 655, avg=534.65, stdev=36.95, samples=20 00:18:36.217 lat (msec) : 20=0.06%, 50=0.35%, 100=6.52%, 250=93.07% 00:18:36.217 cpu : usr=0.20%, sys=1.78%, ctx=1390, majf=0, minf=4097 00:18:36.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:36.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.217 issued rwts: total=5414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.217 job7: (groupid=0, jobs=1): err= 0: pid=78544: Tue May 14 02:15:48 2024 00:18:36.217 read: IOPS=532, BW=133MiB/s (140MB/s)(1343MiB/10086msec) 00:18:36.217 slat (usec): min=16, max=68261, avg=1858.03, stdev=6304.36 00:18:36.217 clat (msec): min=21, max=191, avg=118.12, stdev=14.52 00:18:36.217 lat (msec): min=21, max=198, avg=119.98, stdev=15.82 00:18:36.217 clat percentiles (msec): 00:18:36.217 | 1.00th=[ 78], 5.00th=[ 93], 10.00th=[ 106], 20.00th=[ 111], 00:18:36.217 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 122], 00:18:36.217 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 138], 00:18:36.217 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 192], 99.95th=[ 192], 00:18:36.217 | 99.99th=[ 192] 00:18:36.217 bw ( KiB/s): min=124416, max=175454, per=6.25%, avg=135995.55, stdev=11036.42, samples=20 00:18:36.217 iops : min= 486, max= 685, avg=531.10, stdev=43.01, samples=20 00:18:36.217 lat (msec) : 50=0.69%, 100=5.99%, 250=93.32% 00:18:36.217 cpu : usr=0.19%, sys=1.87%, ctx=1360, majf=0, minf=4097 00:18:36.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:36.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.217 issued rwts: total=5373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.217 job8: (groupid=0, jobs=1): err= 0: pid=78545: Tue May 14 02:15:48 2024 00:18:36.217 read: IOPS=529, BW=132MiB/s (139MB/s)(1335MiB/10094msec) 00:18:36.217 slat (usec): min=17, max=89985, avg=1839.73, stdev=6066.10 
00:18:36.217 clat (msec): min=13, max=212, avg=118.89, stdev=14.54 00:18:36.217 lat (msec): min=13, max=212, avg=120.73, stdev=15.78 00:18:36.217 clat percentiles (msec): 00:18:36.217 | 1.00th=[ 78], 5.00th=[ 97], 10.00th=[ 105], 20.00th=[ 112], 00:18:36.217 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 123], 00:18:36.217 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 138], 00:18:36.217 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 203], 99.95th=[ 213], 00:18:36.217 | 99.99th=[ 213] 00:18:36.217 bw ( KiB/s): min=118272, max=161469, per=6.21%, avg=135090.50, stdev=8674.73, samples=20 00:18:36.217 iops : min= 462, max= 630, avg=527.45, stdev=33.79, samples=20 00:18:36.217 lat (msec) : 20=0.07%, 50=0.56%, 100=5.84%, 250=93.52% 00:18:36.217 cpu : usr=0.28%, sys=1.62%, ctx=1549, majf=0, minf=4097 00:18:36.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:36.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.217 issued rwts: total=5341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.217 job9: (groupid=0, jobs=1): err= 0: pid=78546: Tue May 14 02:15:48 2024 00:18:36.217 read: IOPS=1049, BW=262MiB/s (275MB/s)(2635MiB/10042msec) 00:18:36.217 slat (usec): min=16, max=54027, avg=943.13, stdev=3246.60 00:18:36.217 clat (msec): min=19, max=120, avg=59.95, stdev= 9.58 00:18:36.217 lat (msec): min=19, max=128, avg=60.89, stdev= 9.98 00:18:36.217 clat percentiles (msec): 00:18:36.217 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 54], 00:18:36.217 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 62], 00:18:36.217 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 70], 95.00th=[ 78], 00:18:36.217 | 99.00th=[ 94], 99.50th=[ 100], 99.90th=[ 108], 99.95th=[ 111], 00:18:36.217 | 99.99th=[ 114] 00:18:36.217 bw ( KiB/s): min=187904, max=293376, per=12.34%, avg=268431.55, stdev=23623.43, samples=20 00:18:36.217 iops : min= 734, max= 1146, avg=1048.40, stdev=92.33, samples=20 00:18:36.217 lat (msec) : 20=0.02%, 50=9.74%, 100=89.76%, 250=0.47% 00:18:36.217 cpu : usr=0.41%, sys=3.56%, ctx=2596, majf=0, minf=4097 00:18:36.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:36.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.217 issued rwts: total=10541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.217 job10: (groupid=0, jobs=1): err= 0: pid=78547: Tue May 14 02:15:48 2024 00:18:36.217 read: IOPS=722, BW=181MiB/s (190MB/s)(1820MiB/10069msec) 00:18:36.217 slat (usec): min=17, max=42311, avg=1370.18, stdev=4444.28 00:18:36.217 clat (msec): min=11, max=143, avg=87.01, stdev=12.36 00:18:36.217 lat (msec): min=11, max=143, avg=88.38, stdev=13.04 00:18:36.217 clat percentiles (msec): 00:18:36.217 | 1.00th=[ 50], 5.00th=[ 64], 10.00th=[ 73], 20.00th=[ 81], 00:18:36.217 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:18:36.217 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 101], 95.00th=[ 103], 00:18:36.217 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 144], 99.95th=[ 144], 00:18:36.217 | 99.99th=[ 144] 00:18:36.217 bw ( KiB/s): min=170666, max=250880, per=8.49%, avg=184659.90, stdev=18637.63, samples=20 00:18:36.217 iops : min= 666, max= 980, 
avg=721.05, stdev=72.90, samples=20 00:18:36.217 lat (msec) : 20=0.10%, 50=0.99%, 100=89.75%, 250=9.16% 00:18:36.217 cpu : usr=0.32%, sys=2.81%, ctx=1124, majf=0, minf=4097 00:18:36.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:36.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:36.217 issued rwts: total=7279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:36.217 00:18:36.217 Run status group 0 (all jobs): 00:18:36.217 READ: bw=2123MiB/s (2227MB/s), 132MiB/s-266MiB/s (139MB/s-278MB/s), io=20.9GiB (22.5GB), run=10042-10095msec 00:18:36.217 00:18:36.217 Disk stats (read/write): 00:18:36.217 nvme0n1: ios=14526/0, merge=0/0, ticks=1240133/0, in_queue=1240133, util=97.73% 00:18:36.217 nvme10n1: ios=14175/0, merge=0/0, ticks=1242469/0, in_queue=1242469, util=97.90% 00:18:36.217 nvme1n1: ios=10815/0, merge=0/0, ticks=1241485/0, in_queue=1241485, util=97.95% 00:18:36.217 nvme2n1: ios=21169/0, merge=0/0, ticks=1238094/0, in_queue=1238094, util=98.12% 00:18:36.217 nvme3n1: ios=21277/0, merge=0/0, ticks=1238883/0, in_queue=1238883, util=98.13% 00:18:36.217 nvme4n1: ios=20977/0, merge=0/0, ticks=1238276/0, in_queue=1238276, util=98.32% 00:18:36.217 nvme5n1: ios=10714/0, merge=0/0, ticks=1239627/0, in_queue=1239627, util=98.50% 00:18:36.217 nvme6n1: ios=10618/0, merge=0/0, ticks=1239818/0, in_queue=1239818, util=98.54% 00:18:36.217 nvme7n1: ios=10575/0, merge=0/0, ticks=1242768/0, in_queue=1242768, util=98.77% 00:18:36.218 nvme8n1: ios=20976/0, merge=0/0, ticks=1238624/0, in_queue=1238624, util=98.78% 00:18:36.218 nvme9n1: ios=14466/0, merge=0/0, ticks=1244904/0, in_queue=1244904, util=99.11% 00:18:36.218 02:15:48 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:36.218 [global] 00:18:36.218 thread=1 00:18:36.218 invalidate=1 00:18:36.218 rw=randwrite 00:18:36.218 time_based=1 00:18:36.218 runtime=10 00:18:36.218 ioengine=libaio 00:18:36.218 direct=1 00:18:36.218 bs=262144 00:18:36.218 iodepth=64 00:18:36.218 norandommap=1 00:18:36.218 numjobs=1 00:18:36.218 00:18:36.218 [job0] 00:18:36.218 filename=/dev/nvme0n1 00:18:36.218 [job1] 00:18:36.218 filename=/dev/nvme10n1 00:18:36.218 [job2] 00:18:36.218 filename=/dev/nvme1n1 00:18:36.218 [job3] 00:18:36.218 filename=/dev/nvme2n1 00:18:36.218 [job4] 00:18:36.218 filename=/dev/nvme3n1 00:18:36.218 [job5] 00:18:36.218 filename=/dev/nvme4n1 00:18:36.218 [job6] 00:18:36.218 filename=/dev/nvme5n1 00:18:36.218 [job7] 00:18:36.218 filename=/dev/nvme6n1 00:18:36.218 [job8] 00:18:36.218 filename=/dev/nvme7n1 00:18:36.218 [job9] 00:18:36.218 filename=/dev/nvme8n1 00:18:36.218 [job10] 00:18:36.218 filename=/dev/nvme9n1 00:18:36.218 Could not set queue depth (nvme0n1) 00:18:36.218 Could not set queue depth (nvme10n1) 00:18:36.218 Could not set queue depth (nvme1n1) 00:18:36.218 Could not set queue depth (nvme2n1) 00:18:36.218 Could not set queue depth (nvme3n1) 00:18:36.218 Could not set queue depth (nvme4n1) 00:18:36.218 Could not set queue depth (nvme5n1) 00:18:36.218 Could not set queue depth (nvme6n1) 00:18:36.218 Could not set queue depth (nvme7n1) 00:18:36.218 Could not set queue depth (nvme8n1) 00:18:36.218 Could not set queue depth (nvme9n1) 00:18:36.218 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:36.218 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.218 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.218 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.218 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.218 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.218 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.218 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.218 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.218 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.218 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.218 fio-3.35 00:18:36.218 Starting 11 threads 00:18:46.198 00:18:46.198 job0: (groupid=0, jobs=1): err= 0: pid=78744: Tue May 14 02:15:59 2024 00:18:46.198 write: IOPS=859, BW=215MiB/s (225MB/s)(2162MiB/10065msec); 0 zone resets 00:18:46.198 slat (usec): min=16, max=11686, avg=1151.25, stdev=1937.46 00:18:46.198 clat (msec): min=13, max=135, avg=73.30, stdev= 4.82 00:18:46.198 lat (msec): min=13, max=135, avg=74.45, stdev= 4.56 00:18:46.198 clat percentiles (msec): 00:18:46.198 | 1.00th=[ 66], 5.00th=[ 70], 10.00th=[ 70], 20.00th=[ 71], 00:18:46.198 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 74], 60.00th=[ 75], 00:18:46.198 | 70.00th=[ 75], 80.00th=[ 75], 90.00th=[ 77], 95.00th=[ 77], 00:18:46.198 | 99.00th=[ 78], 99.50th=[ 84], 99.90th=[ 127], 99.95th=[ 132], 00:18:46.198 | 99.99th=[ 136] 00:18:46.198 bw ( KiB/s): min=216064, max=222208, per=12.82%, avg=219823.70, stdev=1485.05, samples=20 00:18:46.198 iops : min= 844, max= 868, avg=858.65, stdev= 5.77, samples=20 00:18:46.198 lat (msec) : 20=0.09%, 50=0.44%, 100=99.12%, 250=0.35% 00:18:46.198 cpu : usr=1.38%, sys=2.25%, ctx=10133, majf=0, minf=1 00:18:46.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:46.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.198 issued rwts: total=0,8649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.198 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.198 job1: (groupid=0, jobs=1): err= 0: pid=78745: Tue May 14 02:15:59 2024 00:18:46.198 write: IOPS=356, BW=89.1MiB/s (93.5MB/s)(909MiB/10194msec); 0 zone resets 00:18:46.198 slat (usec): min=19, max=24914, avg=2748.44, stdev=4810.39 00:18:46.198 clat (msec): min=2, max=406, avg=176.64, stdev=26.96 00:18:46.198 lat (msec): min=2, max=406, avg=179.39, stdev=26.88 00:18:46.198 clat percentiles (msec): 00:18:46.198 | 1.00th=[ 74], 5.00th=[ 140], 10.00th=[ 163], 20.00th=[ 169], 00:18:46.198 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:18:46.198 | 70.00th=[ 184], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 197], 00:18:46.198 | 99.00th=[ 275], 99.50th=[ 334], 99.90th=[ 393], 99.95th=[ 409], 
00:18:46.198 | 99.99th=[ 409] 00:18:46.198 bw ( KiB/s): min=79872, max=118035, per=5.33%, avg=91405.75, stdev=7108.66, samples=20 00:18:46.198 iops : min= 312, max= 461, avg=357.05, stdev=27.75, samples=20 00:18:46.198 lat (msec) : 4=0.08%, 20=0.11%, 50=0.33%, 100=0.88%, 250=97.44% 00:18:46.198 lat (msec) : 500=1.16% 00:18:46.198 cpu : usr=0.61%, sys=1.06%, ctx=3402, majf=0, minf=1 00:18:46.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:46.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.198 issued rwts: total=0,3635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.198 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.198 job2: (groupid=0, jobs=1): err= 0: pid=78757: Tue May 14 02:15:59 2024 00:18:46.198 write: IOPS=1495, BW=374MiB/s (392MB/s)(3753MiB/10039msec); 0 zone resets 00:18:46.198 slat (usec): min=14, max=13028, avg=653.77, stdev=1147.91 00:18:46.198 clat (msec): min=8, max=139, avg=42.13, stdev=11.70 00:18:46.198 lat (msec): min=9, max=141, avg=42.79, stdev=11.86 00:18:46.198 clat percentiles (msec): 00:18:46.198 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:18:46.198 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:18:46.198 | 70.00th=[ 42], 80.00th=[ 42], 90.00th=[ 43], 95.00th=[ 49], 00:18:46.198 | 99.00th=[ 129], 99.50th=[ 134], 99.90th=[ 138], 99.95th=[ 138], 00:18:46.198 | 99.99th=[ 140] 00:18:46.198 bw ( KiB/s): min=135168, max=412672, per=22.32%, avg=382643.40, stdev=60533.33, samples=20 00:18:46.198 iops : min= 528, max= 1612, avg=1494.70, stdev=236.46, samples=20 00:18:46.198 lat (msec) : 10=0.01%, 20=0.21%, 50=95.58%, 100=2.66%, 250=1.55% 00:18:46.198 cpu : usr=1.93%, sys=3.25%, ctx=19452, majf=0, minf=1 00:18:46.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:46.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.198 issued rwts: total=0,15013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.198 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.198 job3: (groupid=0, jobs=1): err= 0: pid=78758: Tue May 14 02:15:59 2024 00:18:46.198 write: IOPS=568, BW=142MiB/s (149MB/s)(1437MiB/10102msec); 0 zone resets 00:18:46.198 slat (usec): min=17, max=35580, avg=1735.42, stdev=2996.33 00:18:46.198 clat (msec): min=5, max=229, avg=110.71, stdev=13.22 00:18:46.198 lat (msec): min=5, max=229, avg=112.45, stdev=13.06 00:18:46.198 clat percentiles (msec): 00:18:46.198 | 1.00th=[ 101], 5.00th=[ 102], 10.00th=[ 103], 20.00th=[ 105], 00:18:46.198 | 30.00th=[ 108], 40.00th=[ 109], 50.00th=[ 109], 60.00th=[ 110], 00:18:46.198 | 70.00th=[ 110], 80.00th=[ 111], 90.00th=[ 125], 95.00th=[ 136], 00:18:46.198 | 99.00th=[ 153], 99.50th=[ 174], 99.90th=[ 222], 99.95th=[ 222], 00:18:46.198 | 99.99th=[ 230] 00:18:46.198 bw ( KiB/s): min=111104, max=153600, per=8.49%, avg=145484.80, stdev=10929.89, samples=20 00:18:46.198 iops : min= 434, max= 600, avg=568.30, stdev=42.69, samples=20 00:18:46.198 lat (msec) : 10=0.23%, 20=0.07%, 100=0.64%, 250=99.06% 00:18:46.198 cpu : usr=0.90%, sys=1.64%, ctx=8747, majf=0, minf=1 00:18:46.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:46.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.199 issued rwts: total=0,5746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.199 job4: (groupid=0, jobs=1): err= 0: pid=78759: Tue May 14 02:15:59 2024 00:18:46.199 write: IOPS=364, BW=91.1MiB/s (95.6MB/s)(929MiB/10190msec); 0 zone resets 00:18:46.199 slat (usec): min=18, max=23852, avg=2672.76, stdev=4725.90 00:18:46.199 clat (msec): min=6, max=405, avg=172.77, stdev=31.55 00:18:46.199 lat (msec): min=6, max=405, avg=175.44, stdev=31.69 00:18:46.199 clat percentiles (msec): 00:18:46.199 | 1.00th=[ 29], 5.00th=[ 136], 10.00th=[ 159], 20.00th=[ 167], 00:18:46.199 | 30.00th=[ 171], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:18:46.199 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 194], 00:18:46.199 | 99.00th=[ 271], 99.50th=[ 330], 99.90th=[ 393], 99.95th=[ 405], 00:18:46.199 | 99.99th=[ 405] 00:18:46.199 bw ( KiB/s): min=81920, max=140569, per=5.45%, avg=93479.65, stdev=11646.77, samples=20 00:18:46.199 iops : min= 320, max= 549, avg=365.15, stdev=45.47, samples=20 00:18:46.199 lat (msec) : 10=0.32%, 20=0.32%, 50=0.94%, 100=1.45%, 250=95.83% 00:18:46.199 lat (msec) : 500=1.13% 00:18:46.199 cpu : usr=0.81%, sys=1.03%, ctx=2135, majf=0, minf=1 00:18:46.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:46.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.199 issued rwts: total=0,3715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.199 job5: (groupid=0, jobs=1): err= 0: pid=78760: Tue May 14 02:15:59 2024 00:18:46.199 write: IOPS=353, BW=88.4MiB/s (92.7MB/s)(900MiB/10186msec); 0 zone resets 00:18:46.199 slat (usec): min=18, max=44162, avg=2772.42, stdev=4883.43 00:18:46.199 clat (msec): min=21, max=409, avg=178.13, stdev=24.06 00:18:46.199 lat (msec): min=21, max=409, avg=180.90, stdev=23.86 00:18:46.199 clat percentiles (msec): 00:18:46.199 | 1.00th=[ 129], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:18:46.199 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:18:46.199 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 207], 00:18:46.199 | 99.00th=[ 275], 99.50th=[ 338], 99.90th=[ 397], 99.95th=[ 409], 00:18:46.199 | 99.99th=[ 409] 00:18:46.199 bw ( KiB/s): min=80384, max=96256, per=5.28%, avg=90563.60, stdev=4542.71, samples=20 00:18:46.199 iops : min= 314, max= 376, avg=353.75, stdev=17.74, samples=20 00:18:46.199 lat (msec) : 50=0.44%, 100=0.22%, 250=98.17%, 500=1.17% 00:18:46.199 cpu : usr=0.61%, sys=1.09%, ctx=4207, majf=0, minf=1 00:18:46.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:46.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.199 issued rwts: total=0,3601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.199 job6: (groupid=0, jobs=1): err= 0: pid=78761: Tue May 14 02:15:59 2024 00:18:46.199 write: IOPS=570, BW=143MiB/s (150MB/s)(1441MiB/10097msec); 0 zone resets 00:18:46.199 slat (usec): min=17, max=18454, avg=1730.42, stdev=2956.27 00:18:46.199 clat (msec): min=18, max=221, avg=110.36, stdev=11.68 00:18:46.199 lat (msec): min=18, max=221, avg=112.09, stdev=11.48 
00:18:46.199 clat percentiles (msec): 00:18:46.199 | 1.00th=[ 101], 5.00th=[ 102], 10.00th=[ 103], 20.00th=[ 105], 00:18:46.199 | 30.00th=[ 108], 40.00th=[ 109], 50.00th=[ 109], 60.00th=[ 110], 00:18:46.199 | 70.00th=[ 110], 80.00th=[ 111], 90.00th=[ 125], 95.00th=[ 133], 00:18:46.199 | 99.00th=[ 144], 99.50th=[ 165], 99.90th=[ 213], 99.95th=[ 213], 00:18:46.199 | 99.99th=[ 222] 00:18:46.199 bw ( KiB/s): min=118784, max=153600, per=8.51%, avg=145920.00, stdev=9497.64, samples=20 00:18:46.199 iops : min= 464, max= 600, avg=570.00, stdev=37.10, samples=20 00:18:46.199 lat (msec) : 20=0.07%, 50=0.28%, 100=0.87%, 250=98.79% 00:18:46.199 cpu : usr=0.92%, sys=1.75%, ctx=6873, majf=0, minf=1 00:18:46.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:46.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.199 issued rwts: total=0,5763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.199 job7: (groupid=0, jobs=1): err= 0: pid=78762: Tue May 14 02:15:59 2024 00:18:46.199 write: IOPS=405, BW=101MiB/s (106MB/s)(1032MiB/10183msec); 0 zone resets 00:18:46.199 slat (usec): min=17, max=24895, avg=2405.18, stdev=4533.46 00:18:46.199 clat (msec): min=10, max=407, avg=155.46, stdev=57.08 00:18:46.199 lat (msec): min=10, max=407, avg=157.87, stdev=57.78 00:18:46.199 clat percentiles (msec): 00:18:46.199 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 159], 00:18:46.199 | 30.00th=[ 171], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:18:46.199 | 70.00th=[ 184], 80.00th=[ 186], 90.00th=[ 190], 95.00th=[ 194], 00:18:46.199 | 99.00th=[ 259], 99.50th=[ 334], 99.90th=[ 393], 99.95th=[ 393], 00:18:46.199 | 99.99th=[ 409] 00:18:46.199 bw ( KiB/s): min=80384, max=366592, per=6.07%, avg=104012.80, stdev=61947.72, samples=20 00:18:46.199 iops : min= 314, max= 1432, avg=406.30, stdev=241.98, samples=20 00:18:46.199 lat (msec) : 20=0.29%, 50=15.85%, 100=2.40%, 250=80.44%, 500=1.02% 00:18:46.199 cpu : usr=0.68%, sys=1.18%, ctx=3972, majf=0, minf=1 00:18:46.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:46.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.199 issued rwts: total=0,4126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.199 job8: (groupid=0, jobs=1): err= 0: pid=78766: Tue May 14 02:15:59 2024 00:18:46.199 write: IOPS=358, BW=89.7MiB/s (94.1MB/s)(914MiB/10187msec); 0 zone resets 00:18:46.199 slat (usec): min=16, max=22664, avg=2732.37, stdev=4771.59 00:18:46.199 clat (msec): min=17, max=402, avg=175.51, stdev=25.73 00:18:46.199 lat (msec): min=17, max=402, avg=178.24, stdev=25.62 00:18:46.199 clat percentiles (msec): 00:18:46.199 | 1.00th=[ 79], 5.00th=[ 140], 10.00th=[ 163], 20.00th=[ 169], 00:18:46.199 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:18:46.199 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 186], 95.00th=[ 190], 00:18:46.199 | 99.00th=[ 271], 99.50th=[ 330], 99.90th=[ 388], 99.95th=[ 405], 00:18:46.199 | 99.99th=[ 405] 00:18:46.199 bw ( KiB/s): min=80384, max=116736, per=5.37%, avg=91980.80, stdev=6659.68, samples=20 00:18:46.199 iops : min= 314, max= 456, avg=359.30, stdev=26.01, samples=20 00:18:46.199 lat (msec) : 20=0.11%, 
50=0.44%, 100=0.77%, 250=97.54%, 500=1.15% 00:18:46.199 cpu : usr=0.58%, sys=0.85%, ctx=4498, majf=0, minf=1 00:18:46.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:46.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.199 issued rwts: total=0,3656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.199 job9: (groupid=0, jobs=1): err= 0: pid=78770: Tue May 14 02:15:59 2024 00:18:46.199 write: IOPS=566, BW=142MiB/s (149MB/s)(1432MiB/10099msec); 0 zone resets 00:18:46.199 slat (usec): min=17, max=43346, avg=1742.74, stdev=3030.82 00:18:46.199 clat (msec): min=4, max=225, avg=111.08, stdev=12.84 00:18:46.199 lat (msec): min=4, max=225, avg=112.82, stdev=12.65 00:18:46.199 clat percentiles (msec): 00:18:46.199 | 1.00th=[ 101], 5.00th=[ 102], 10.00th=[ 103], 20.00th=[ 105], 00:18:46.199 | 30.00th=[ 108], 40.00th=[ 109], 50.00th=[ 109], 60.00th=[ 110], 00:18:46.199 | 70.00th=[ 111], 80.00th=[ 111], 90.00th=[ 125], 95.00th=[ 136], 00:18:46.199 | 99.00th=[ 159], 99.50th=[ 180], 99.90th=[ 218], 99.95th=[ 218], 00:18:46.199 | 99.99th=[ 226] 00:18:46.199 bw ( KiB/s): min=101376, max=153600, per=8.46%, avg=144957.95, stdev=12718.86, samples=20 00:18:46.199 iops : min= 396, max= 600, avg=566.20, stdev=49.67, samples=20 00:18:46.199 lat (msec) : 10=0.03%, 50=0.21%, 100=0.38%, 250=99.37% 00:18:46.199 cpu : usr=0.73%, sys=1.45%, ctx=8364, majf=0, minf=1 00:18:46.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:46.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.199 issued rwts: total=0,5726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.199 job10: (groupid=0, jobs=1): err= 0: pid=78771: Tue May 14 02:15:59 2024 00:18:46.199 write: IOPS=857, BW=214MiB/s (225MB/s)(2157MiB/10065msec); 0 zone resets 00:18:46.199 slat (usec): min=17, max=18812, avg=1154.10, stdev=1949.41 00:18:46.199 clat (msec): min=2, max=140, avg=73.50, stdev= 4.86 00:18:46.199 lat (msec): min=2, max=140, avg=74.65, stdev= 4.61 00:18:46.199 clat percentiles (msec): 00:18:46.199 | 1.00th=[ 69], 5.00th=[ 70], 10.00th=[ 70], 20.00th=[ 71], 00:18:46.199 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 74], 60.00th=[ 75], 00:18:46.199 | 70.00th=[ 75], 80.00th=[ 75], 90.00th=[ 77], 95.00th=[ 77], 00:18:46.199 | 99.00th=[ 79], 99.50th=[ 93], 99.90th=[ 131], 99.95th=[ 136], 00:18:46.199 | 99.99th=[ 140] 00:18:46.199 bw ( KiB/s): min=210340, max=221696, per=12.79%, avg=219233.80, stdev=2677.00, samples=20 00:18:46.199 iops : min= 821, max= 866, avg=856.35, stdev=10.57, samples=20 00:18:46.199 lat (msec) : 4=0.05%, 10=0.01%, 20=0.09%, 50=0.14%, 100=99.32% 00:18:46.199 lat (msec) : 250=0.39% 00:18:46.199 cpu : usr=1.45%, sys=2.14%, ctx=10016, majf=0, minf=1 00:18:46.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:46.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:46.199 issued rwts: total=0,8626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.199 00:18:46.199 Run status group 0 (all jobs): 00:18:46.199 
WRITE: bw=1674MiB/s (1755MB/s), 88.4MiB/s-374MiB/s (92.7MB/s-392MB/s), io=16.7GiB (17.9GB), run=10039-10194msec 00:18:46.199 00:18:46.199 Disk stats (read/write): 00:18:46.199 nvme0n1: ios=49/17065, merge=0/0, ticks=33/1210039, in_queue=1210072, util=97.46% 00:18:46.199 nvme10n1: ios=49/7111, merge=0/0, ticks=44/1203060, in_queue=1203104, util=97.78% 00:18:46.199 nvme1n1: ios=28/29703, merge=0/0, ticks=20/1212529, in_queue=1212549, util=97.60% 00:18:46.199 nvme2n1: ios=0/11321, merge=0/0, ticks=0/1208878, in_queue=1208878, util=97.82% 00:18:46.199 nvme3n1: ios=5/7275, merge=0/0, ticks=169/1204053, in_queue=1204222, util=98.29% 00:18:46.199 nvme4n1: ios=0/7045, merge=0/0, ticks=0/1202312, in_queue=1202312, util=98.06% 00:18:46.200 nvme5n1: ios=0/11342, merge=0/0, ticks=0/1207903, in_queue=1207903, util=98.18% 00:18:46.200 nvme6n1: ios=0/8092, merge=0/0, ticks=0/1201648, in_queue=1201648, util=98.26% 00:18:46.200 nvme7n1: ios=0/7150, merge=0/0, ticks=0/1202287, in_queue=1202287, util=98.60% 00:18:46.200 nvme8n1: ios=0/11273, merge=0/0, ticks=0/1208546, in_queue=1208546, util=98.82% 00:18:46.200 nvme9n1: ios=0/17031, merge=0/0, ticks=0/1210817, in_queue=1210817, util=98.90% 00:18:46.200 02:15:59 -- target/multiconnection.sh@36 -- # sync 00:18:46.200 02:15:59 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:46.200 02:15:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:15:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.200 02:15:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:46.200 02:15:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.200 02:15:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.200 02:15:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:18:46.200 02:15:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.200 02:15:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:18:46.200 02:15:59 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.200 02:15:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.200 02:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.200 02:15:59 -- common/autotest_common.sh@10 -- # set +x 00:18:46.200 02:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.200 02:15:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:15:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:46.200 02:15:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:46.200 02:15:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.200 02:15:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.200 02:15:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:18:46.200 02:15:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.200 02:15:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:18:46.200 02:15:59 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.200 02:15:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:46.200 02:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.200 02:15:59 -- common/autotest_common.sh@10 -- # set +x 
00:18:46.200 02:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.200 02:15:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:15:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:46.200 02:15:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:46.200 02:15:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.200 02:15:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.200 02:15:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:18:46.200 02:15:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.200 02:15:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:18:46.200 02:15:59 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.200 02:15:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:46.200 02:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.200 02:15:59 -- common/autotest_common.sh@10 -- # set +x 00:18:46.200 02:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.200 02:15:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:15:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:46.200 02:15:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:46.200 02:15:59 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.200 02:15:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:18:46.200 02:15:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.200 02:15:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.200 02:15:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:18:46.200 02:15:59 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.200 02:15:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:46.200 02:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.200 02:15:59 -- common/autotest_common.sh@10 -- # set +x 00:18:46.200 02:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.200 02:15:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:15:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:46.200 02:16:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:46.200 02:16:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:18:46.200 02:16:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.200 02:16:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:46.200 02:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.200 02:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.200 02:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.200 02:16:00 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:16:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:46.200 02:16:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:46.200 02:16:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.200 02:16:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:46.200 02:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.200 02:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.200 02:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.200 02:16:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:16:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:46.200 02:16:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:46.200 02:16:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:18:46.200 02:16:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.200 02:16:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:46.200 02:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.200 02:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.200 02:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.200 02:16:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:16:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:46.200 02:16:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:46.200 02:16:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:18:46.200 02:16:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.200 02:16:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:46.200 02:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.200 02:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.200 02:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.200 02:16:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:16:00 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:46.200 02:16:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:46.200 02:16:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:18:46.200 02:16:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.200 02:16:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:46.200 02:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.200 02:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.200 02:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.200 02:16:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:16:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:46.200 02:16:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:46.200 02:16:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:18:46.200 02:16:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.200 02:16:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.200 02:16:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:46.200 02:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.200 02:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.200 02:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.200 02:16:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.200 02:16:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:46.200 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:46.200 02:16:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:46.200 02:16:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.201 02:16:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.201 02:16:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:18:46.201 02:16:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.201 02:16:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:18:46.201 02:16:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.201 02:16:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:46.201 02:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.201 02:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.201 02:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.201 02:16:00 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:46.201 02:16:00 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:46.201 02:16:00 
-- target/multiconnection.sh@47 -- # nvmftestfini 00:18:46.201 02:16:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:46.201 02:16:00 -- nvmf/common.sh@116 -- # sync 00:18:46.201 02:16:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:46.201 02:16:00 -- nvmf/common.sh@119 -- # set +e 00:18:46.201 02:16:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:46.201 02:16:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:46.201 rmmod nvme_tcp 00:18:46.201 rmmod nvme_fabrics 00:18:46.201 rmmod nvme_keyring 00:18:46.201 02:16:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:46.201 02:16:00 -- nvmf/common.sh@123 -- # set -e 00:18:46.201 02:16:00 -- nvmf/common.sh@124 -- # return 0 00:18:46.201 02:16:00 -- nvmf/common.sh@477 -- # '[' -n 78060 ']' 00:18:46.201 02:16:00 -- nvmf/common.sh@478 -- # killprocess 78060 00:18:46.201 02:16:00 -- common/autotest_common.sh@926 -- # '[' -z 78060 ']' 00:18:46.201 02:16:00 -- common/autotest_common.sh@930 -- # kill -0 78060 00:18:46.201 02:16:00 -- common/autotest_common.sh@931 -- # uname 00:18:46.201 02:16:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:46.201 02:16:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78060 00:18:46.201 02:16:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:46.201 killing process with pid 78060 00:18:46.201 02:16:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:46.201 02:16:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78060' 00:18:46.201 02:16:00 -- common/autotest_common.sh@945 -- # kill 78060 00:18:46.201 02:16:00 -- common/autotest_common.sh@950 -- # wait 78060 00:18:46.508 02:16:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:46.508 02:16:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:46.508 02:16:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:46.508 02:16:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.508 02:16:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:46.508 02:16:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.508 02:16:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.508 02:16:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.773 02:16:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:46.773 ************************************ 00:18:46.773 END TEST nvmf_multiconnection 00:18:46.773 00:18:46.773 real 0m49.214s 00:18:46.773 user 2m44.324s 00:18:46.773 sys 0m25.431s 00:18:46.773 02:16:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.773 02:16:01 -- common/autotest_common.sh@10 -- # set +x 00:18:46.773 ************************************ 00:18:46.773 02:16:01 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:46.773 02:16:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:46.773 02:16:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:46.773 02:16:01 -- common/autotest_common.sh@10 -- # set +x 00:18:46.773 ************************************ 00:18:46.773 START TEST nvmf_initiator_timeout 00:18:46.773 ************************************ 00:18:46.773 02:16:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:46.773 * Looking for test storage... 
00:18:46.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:46.773 02:16:01 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:46.773 02:16:01 -- nvmf/common.sh@7 -- # uname -s 00:18:46.773 02:16:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.773 02:16:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.773 02:16:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.773 02:16:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.773 02:16:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.773 02:16:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.773 02:16:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.773 02:16:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.773 02:16:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.773 02:16:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.773 02:16:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:18:46.773 02:16:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:18:46.773 02:16:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.773 02:16:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.773 02:16:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:46.773 02:16:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:46.773 02:16:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.773 02:16:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.773 02:16:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.773 02:16:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.773 02:16:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.773 02:16:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.773 02:16:01 -- 
paths/export.sh@5 -- # export PATH 00:18:46.773 02:16:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.773 02:16:01 -- nvmf/common.sh@46 -- # : 0 00:18:46.773 02:16:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:46.773 02:16:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:46.773 02:16:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:46.773 02:16:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.773 02:16:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.773 02:16:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:46.773 02:16:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:46.773 02:16:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:46.773 02:16:01 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.773 02:16:01 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.773 02:16:01 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:46.773 02:16:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:46.773 02:16:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.773 02:16:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:46.773 02:16:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:46.773 02:16:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:46.773 02:16:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.773 02:16:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.773 02:16:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.773 02:16:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:46.773 02:16:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:46.773 02:16:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:46.773 02:16:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:46.773 02:16:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:46.773 02:16:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:46.773 02:16:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.773 02:16:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.773 02:16:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:46.773 02:16:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:46.773 02:16:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:46.773 02:16:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:46.773 02:16:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:46.773 02:16:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.773 02:16:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:46.773 02:16:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:46.773 02:16:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:46.773 02:16:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:46.773 02:16:01 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:18:46.773 02:16:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:46.773 Cannot find device "nvmf_tgt_br" 00:18:46.773 02:16:01 -- nvmf/common.sh@154 -- # true 00:18:46.773 02:16:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:46.773 Cannot find device "nvmf_tgt_br2" 00:18:46.773 02:16:01 -- nvmf/common.sh@155 -- # true 00:18:46.773 02:16:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:46.773 02:16:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:46.773 Cannot find device "nvmf_tgt_br" 00:18:46.773 02:16:01 -- nvmf/common.sh@157 -- # true 00:18:46.773 02:16:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:46.773 Cannot find device "nvmf_tgt_br2" 00:18:46.773 02:16:01 -- nvmf/common.sh@158 -- # true 00:18:46.773 02:16:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:47.032 02:16:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:47.032 02:16:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:47.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.032 02:16:01 -- nvmf/common.sh@161 -- # true 00:18:47.032 02:16:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:47.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.032 02:16:01 -- nvmf/common.sh@162 -- # true 00:18:47.032 02:16:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:47.032 02:16:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:47.032 02:16:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:47.032 02:16:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:47.032 02:16:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:47.032 02:16:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:47.032 02:16:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:47.032 02:16:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:47.032 02:16:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:47.032 02:16:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:47.032 02:16:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:47.032 02:16:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:47.032 02:16:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:47.032 02:16:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:47.032 02:16:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:47.032 02:16:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:47.032 02:16:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:47.032 02:16:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:47.032 02:16:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:47.032 02:16:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:47.032 02:16:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:47.032 02:16:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:18:47.032 02:16:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:47.032 02:16:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:47.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:18:47.032 00:18:47.032 --- 10.0.0.2 ping statistics --- 00:18:47.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.032 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:47.032 02:16:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:47.032 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:47.032 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:18:47.032 00:18:47.032 --- 10.0.0.3 ping statistics --- 00:18:47.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.032 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:47.032 02:16:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:47.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:47.032 00:18:47.032 --- 10.0.0.1 ping statistics --- 00:18:47.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.032 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:47.032 02:16:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.032 02:16:01 -- nvmf/common.sh@421 -- # return 0 00:18:47.032 02:16:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:47.032 02:16:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.032 02:16:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:47.032 02:16:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:47.032 02:16:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.032 02:16:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:47.032 02:16:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:47.032 02:16:01 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:47.032 02:16:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:47.032 02:16:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:47.032 02:16:01 -- common/autotest_common.sh@10 -- # set +x 00:18:47.032 02:16:01 -- nvmf/common.sh@469 -- # nvmfpid=79132 00:18:47.032 02:16:01 -- nvmf/common.sh@470 -- # waitforlisten 79132 00:18:47.032 02:16:01 -- common/autotest_common.sh@819 -- # '[' -z 79132 ']' 00:18:47.032 02:16:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:47.032 02:16:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.032 02:16:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:47.032 02:16:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.033 02:16:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:47.033 02:16:01 -- common/autotest_common.sh@10 -- # set +x 00:18:47.290 [2024-05-14 02:16:01.673968] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
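Everything up to the pings above is nvmf_veth_init building a self-contained virtual topology: one veth pair for the initiator side (nvmf_init_if / nvmf_init_br), two veth pairs whose far ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace for the target, an nvmf_br bridge joining the host-side peers, and an iptables rule admitting TCP/4420. A condensed sketch of the same topology, with the addresses from the trace and error handling omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the default netns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target path
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target path
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                           # initiator -> first target address

nvmf_tgt is then launched through "ip netns exec nvmf_tgt_ns_spdk" (the NVMF_TARGET_NS_CMD prefix above), so every listener it opens sits behind 10.0.0.2/10.0.0.3 while the host side plays the initiator at 10.0.0.1.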
00:18:47.290 [2024-05-14 02:16:01.674069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.290 [2024-05-14 02:16:01.816646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:47.549 [2024-05-14 02:16:01.886037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:47.549 [2024-05-14 02:16:01.886208] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.549 [2024-05-14 02:16:01.886224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.549 [2024-05-14 02:16:01.886235] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.549 [2024-05-14 02:16:01.886363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.549 [2024-05-14 02:16:01.886602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.549 [2024-05-14 02:16:01.886510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.549 [2024-05-14 02:16:01.886594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.484 02:16:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:48.484 02:16:02 -- common/autotest_common.sh@852 -- # return 0 00:18:48.484 02:16:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:48.484 02:16:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:48.484 02:16:02 -- common/autotest_common.sh@10 -- # set +x 00:18:48.484 02:16:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.484 02:16:02 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:48.484 02:16:02 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.484 02:16:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.484 02:16:02 -- common/autotest_common.sh@10 -- # set +x 00:18:48.484 Malloc0 00:18:48.484 02:16:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.484 02:16:02 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:48.484 02:16:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.484 02:16:02 -- common/autotest_common.sh@10 -- # set +x 00:18:48.484 Delay0 00:18:48.484 02:16:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.484 02:16:02 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.484 02:16:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.484 02:16:02 -- common/autotest_common.sh@10 -- # set +x 00:18:48.484 [2024-05-14 02:16:02.816904] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.484 02:16:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.484 02:16:02 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:48.484 02:16:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.484 02:16:02 -- common/autotest_common.sh@10 -- # set +x 00:18:48.484 02:16:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.484 02:16:02 -- 
target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:48.484 02:16:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.484 02:16:02 -- common/autotest_common.sh@10 -- # set +x 00:18:48.484 02:16:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.484 02:16:02 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.484 02:16:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.484 02:16:02 -- common/autotest_common.sh@10 -- # set +x 00:18:48.484 [2024-05-14 02:16:02.845052] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.484 02:16:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.484 02:16:02 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:48.484 02:16:03 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:48.484 02:16:03 -- common/autotest_common.sh@1177 -- # local i=0 00:18:48.484 02:16:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.484 02:16:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:48.484 02:16:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:51.017 02:16:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:51.017 02:16:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:51.017 02:16:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:51.017 02:16:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:51.017 02:16:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.017 02:16:05 -- common/autotest_common.sh@1187 -- # return 0 00:18:51.017 02:16:05 -- target/initiator_timeout.sh@35 -- # fio_pid=79214 00:18:51.017 02:16:05 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:51.017 02:16:05 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:51.017 [global] 00:18:51.017 thread=1 00:18:51.017 invalidate=1 00:18:51.017 rw=write 00:18:51.017 time_based=1 00:18:51.017 runtime=60 00:18:51.017 ioengine=libaio 00:18:51.017 direct=1 00:18:51.017 bs=4096 00:18:51.017 iodepth=1 00:18:51.017 norandommap=0 00:18:51.017 numjobs=1 00:18:51.017 00:18:51.017 verify_dump=1 00:18:51.017 verify_backlog=512 00:18:51.017 verify_state_save=0 00:18:51.017 do_verify=1 00:18:51.017 verify=crc32c-intel 00:18:51.017 [job0] 00:18:51.017 filename=/dev/nvme0n1 00:18:51.017 Could not set queue depth (nvme0n1) 00:18:51.017 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.017 fio-3.35 00:18:51.017 Starting 1 thread 00:18:53.549 02:16:08 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:53.549 02:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.549 02:16:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.549 true 00:18:53.549 02:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.549 02:16:08 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:53.549 02:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 
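The RPC trace above is the whole data path for the initiator-timeout check: a 64 MiB malloc bdev wrapped in a delay bdev (Delay0), exported through one TCP subsystem and listener, and connected from the kernel initiator so fio can write to /dev/nvme0n1. The test then raises the delay-bdev latencies to 31 s (31000000 us), past the kernel initiator's default 30 s I/O timeout, and later drops them back so the 60 s job can finish. A condensed sketch of the same flow in rpc.py form, with the values from the trace (the script actually issues these through its rpc_cmd wrapper, and the --hostnqn/--hostid flags on nvme connect are omitted here):

  RPC="scripts/rpc.py"                                                  # default RPC socket assumed; the target runs inside the netns
  $RPC bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB backing bdev, 512 B blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # start with 30 us latencies
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 # /dev/nvme0n1 appears

  # With fio running against /dev/nvme0n1, stall the backend past the initiator timeout ...
  for lat in avg_read avg_write p99_read p99_write; do
      $RPC bdev_delay_update_latency Delay0 "$lat" 31000000
  done
  sleep 3
  # ... then restore fast latencies so the remaining I/O of the 60 s job completes cleanly.
  for lat in avg_read avg_write p99_read p99_write; do
      $RPC bdev_delay_update_latency Delay0 "$lat" 30
  done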
00:18:53.549 02:16:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.549 true 00:18:53.549 02:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.549 02:16:08 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:53.549 02:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.549 02:16:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.549 true 00:18:53.549 02:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.549 02:16:08 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:53.549 02:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.549 02:16:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.549 true 00:18:53.549 02:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.549 02:16:08 -- target/initiator_timeout.sh@45 -- # sleep 3 00:18:56.831 02:16:11 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:56.831 02:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.831 02:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.831 true 00:18:56.831 02:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.831 02:16:11 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:56.831 02:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.831 02:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.831 true 00:18:56.831 02:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.831 02:16:11 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:56.831 02:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.831 02:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.832 true 00:18:56.832 02:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.832 02:16:11 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:56.832 02:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.832 02:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.832 true 00:18:56.832 02:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.832 02:16:11 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:56.832 02:16:11 -- target/initiator_timeout.sh@54 -- # wait 79214 00:19:53.083 00:19:53.083 job0: (groupid=0, jobs=1): err= 0: pid=79235: Tue May 14 02:17:05 2024 00:19:53.083 read: IOPS=832, BW=3331KiB/s (3411kB/s)(195MiB/60000msec) 00:19:53.083 slat (nsec): min=13201, max=74286, avg=16201.68, stdev=3148.86 00:19:53.083 clat (usec): min=157, max=582, avg=190.66, stdev=15.15 00:19:53.083 lat (usec): min=171, max=597, avg=206.86, stdev=15.82 00:19:53.083 clat percentiles (usec): 00:19:53.083 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:19:53.083 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:19:53.083 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:19:53.083 | 99.00th=[ 243], 99.50th=[ 260], 99.90th=[ 310], 99.95th=[ 330], 00:19:53.083 | 99.99th=[ 490] 00:19:53.083 write: IOPS=836, BW=3345KiB/s (3425kB/s)(196MiB/60000msec); 0 zone resets 00:19:53.083 slat (usec): min=19, max=15078, avg=24.57, stdev=76.07 00:19:53.083 clat (usec): min=3, max=40707k, avg=961.77, stdev=181726.63 00:19:53.083 lat (usec): min=143, max=40707k, avg=986.34, stdev=181726.65 00:19:53.083 clat 
percentiles (usec): 00:19:53.083 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:19:53.083 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:19:53.083 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 172], 00:19:53.083 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 253], 99.95th=[ 351], 00:19:53.083 | 99.99th=[ 996] 00:19:53.083 bw ( KiB/s): min= 1512, max=12288, per=100.00%, avg=10081.90, stdev=2099.87, samples=39 00:19:53.083 iops : min= 378, max= 3072, avg=2520.46, stdev=524.96, samples=39 00:19:53.083 lat (usec) : 4=0.01%, 20=0.01%, 250=99.58%, 500=0.40%, 750=0.01% 00:19:53.083 lat (usec) : 1000=0.01% 00:19:53.083 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:19:53.083 cpu : usr=0.59%, sys=2.52%, ctx=100169, majf=0, minf=2 00:19:53.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:53.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.083 issued rwts: total=49971,50176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:53.083 00:19:53.083 Run status group 0 (all jobs): 00:19:53.083 READ: bw=3331KiB/s (3411kB/s), 3331KiB/s-3331KiB/s (3411kB/s-3411kB/s), io=195MiB (205MB), run=60000-60000msec 00:19:53.083 WRITE: bw=3345KiB/s (3425kB/s), 3345KiB/s-3345KiB/s (3425kB/s-3425kB/s), io=196MiB (206MB), run=60000-60000msec 00:19:53.083 00:19:53.083 Disk stats (read/write): 00:19:53.083 nvme0n1: ios=49988/49976, merge=0/0, ticks=9848/8103, in_queue=17951, util=99.78% 00:19:53.083 02:17:05 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:53.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:53.083 02:17:05 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:53.083 02:17:05 -- common/autotest_common.sh@1198 -- # local i=0 00:19:53.083 02:17:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:53.083 02:17:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:53.083 02:17:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:53.083 02:17:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:53.083 02:17:05 -- common/autotest_common.sh@1210 -- # return 0 00:19:53.083 02:17:05 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:53.083 nvmf hotplug test: fio successful as expected 00:19:53.083 02:17:05 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:53.083 02:17:05 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.083 02:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.083 02:17:05 -- common/autotest_common.sh@10 -- # set +x 00:19:53.083 02:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.083 02:17:05 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:53.083 02:17:05 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:53.083 02:17:05 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:53.083 02:17:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:53.083 02:17:05 -- nvmf/common.sh@116 -- # sync 00:19:53.083 02:17:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:53.083 02:17:05 -- nvmf/common.sh@119 -- # set +e 00:19:53.083 02:17:05 -- nvmf/common.sh@120 -- # 
for i in {1..20} 00:19:53.083 02:17:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:53.083 rmmod nvme_tcp 00:19:53.083 rmmod nvme_fabrics 00:19:53.083 rmmod nvme_keyring 00:19:53.083 02:17:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:53.083 02:17:05 -- nvmf/common.sh@123 -- # set -e 00:19:53.083 02:17:05 -- nvmf/common.sh@124 -- # return 0 00:19:53.083 02:17:05 -- nvmf/common.sh@477 -- # '[' -n 79132 ']' 00:19:53.083 02:17:05 -- nvmf/common.sh@478 -- # killprocess 79132 00:19:53.083 02:17:05 -- common/autotest_common.sh@926 -- # '[' -z 79132 ']' 00:19:53.083 02:17:05 -- common/autotest_common.sh@930 -- # kill -0 79132 00:19:53.083 02:17:05 -- common/autotest_common.sh@931 -- # uname 00:19:53.083 02:17:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:53.083 02:17:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79132 00:19:53.083 02:17:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:53.083 02:17:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:53.083 02:17:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79132' 00:19:53.083 killing process with pid 79132 00:19:53.083 02:17:05 -- common/autotest_common.sh@945 -- # kill 79132 00:19:53.083 02:17:05 -- common/autotest_common.sh@950 -- # wait 79132 00:19:53.083 02:17:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:53.083 02:17:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:53.083 02:17:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:53.083 02:17:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.083 02:17:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:53.083 02:17:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.083 02:17:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.083 02:17:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.083 02:17:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:53.083 00:19:53.083 real 1m4.573s 00:19:53.083 user 4m5.734s 00:19:53.083 sys 0m9.711s 00:19:53.083 02:17:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.083 02:17:05 -- common/autotest_common.sh@10 -- # set +x 00:19:53.083 ************************************ 00:19:53.083 END TEST nvmf_initiator_timeout 00:19:53.083 ************************************ 00:19:53.083 02:17:05 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:53.083 02:17:05 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:19:53.083 02:17:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:53.083 02:17:05 -- common/autotest_common.sh@10 -- # set +x 00:19:53.083 02:17:05 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:19:53.083 02:17:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:53.083 02:17:05 -- common/autotest_common.sh@10 -- # set +x 00:19:53.083 02:17:05 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:19:53.083 02:17:05 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:53.083 02:17:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:53.083 02:17:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:53.083 02:17:05 -- common/autotest_common.sh@10 -- # set +x 00:19:53.083 ************************************ 00:19:53.083 START TEST nvmf_multicontroller 00:19:53.083 ************************************ 00:19:53.084 02:17:05 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:53.084 * Looking for test storage... 00:19:53.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:53.084 02:17:05 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:53.084 02:17:05 -- nvmf/common.sh@7 -- # uname -s 00:19:53.084 02:17:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.084 02:17:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.084 02:17:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.084 02:17:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.084 02:17:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.084 02:17:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.084 02:17:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.084 02:17:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.084 02:17:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.084 02:17:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.084 02:17:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:19:53.084 02:17:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:19:53.084 02:17:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.084 02:17:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.084 02:17:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:53.084 02:17:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:53.084 02:17:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.084 02:17:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.084 02:17:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.084 02:17:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.084 02:17:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.084 02:17:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.084 02:17:05 -- paths/export.sh@5 -- # export PATH 00:19:53.084 02:17:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.084 02:17:05 -- nvmf/common.sh@46 -- # : 0 00:19:53.084 02:17:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:53.084 02:17:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:53.084 02:17:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:53.084 02:17:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.084 02:17:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.084 02:17:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:53.084 02:17:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:53.084 02:17:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:53.084 02:17:05 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:53.084 02:17:05 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:53.084 02:17:05 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:53.084 02:17:05 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:53.084 02:17:05 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.084 02:17:05 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:53.084 02:17:05 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:53.084 02:17:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:53.084 02:17:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.084 02:17:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:53.084 02:17:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:53.084 02:17:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:53.084 02:17:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.084 02:17:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.084 02:17:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.084 02:17:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:53.084 02:17:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:53.084 02:17:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:53.084 02:17:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:53.084 02:17:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:53.084 02:17:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:53.084 02:17:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.084 02:17:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:19:53.084 02:17:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:53.084 02:17:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:53.084 02:17:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:53.084 02:17:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:53.084 02:17:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:53.084 02:17:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.084 02:17:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:53.084 02:17:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:53.084 02:17:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:53.084 02:17:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:53.084 02:17:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:53.084 02:17:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:53.084 Cannot find device "nvmf_tgt_br" 00:19:53.084 02:17:05 -- nvmf/common.sh@154 -- # true 00:19:53.084 02:17:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:53.084 Cannot find device "nvmf_tgt_br2" 00:19:53.084 02:17:05 -- nvmf/common.sh@155 -- # true 00:19:53.084 02:17:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:53.084 02:17:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:53.084 Cannot find device "nvmf_tgt_br" 00:19:53.084 02:17:05 -- nvmf/common.sh@157 -- # true 00:19:53.084 02:17:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:53.084 Cannot find device "nvmf_tgt_br2" 00:19:53.084 02:17:05 -- nvmf/common.sh@158 -- # true 00:19:53.084 02:17:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:53.084 02:17:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:53.084 02:17:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:53.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.084 02:17:06 -- nvmf/common.sh@161 -- # true 00:19:53.084 02:17:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:53.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.084 02:17:06 -- nvmf/common.sh@162 -- # true 00:19:53.084 02:17:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:53.084 02:17:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:53.084 02:17:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:53.084 02:17:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:53.084 02:17:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:53.084 02:17:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:53.084 02:17:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:53.084 02:17:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:53.084 02:17:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:53.084 02:17:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:53.084 02:17:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:53.084 02:17:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:19:53.084 02:17:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:53.084 02:17:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:53.084 02:17:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:53.084 02:17:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:53.084 02:17:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:53.084 02:17:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:53.084 02:17:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:53.084 02:17:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:53.084 02:17:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:53.084 02:17:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:53.084 02:17:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:53.084 02:17:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:53.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:19:53.084 00:19:53.084 --- 10.0.0.2 ping statistics --- 00:19:53.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.084 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:53.084 02:17:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:53.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:53.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:53.084 00:19:53.084 --- 10.0.0.3 ping statistics --- 00:19:53.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.084 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:53.084 02:17:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:53.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:53.084 00:19:53.084 --- 10.0.0.1 ping statistics --- 00:19:53.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.084 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:53.084 02:17:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.084 02:17:06 -- nvmf/common.sh@421 -- # return 0 00:19:53.085 02:17:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:53.085 02:17:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.085 02:17:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:53.085 02:17:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:53.085 02:17:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.085 02:17:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:53.085 02:17:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:53.085 02:17:06 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:53.085 02:17:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:53.085 02:17:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:53.085 02:17:06 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 02:17:06 -- nvmf/common.sh@469 -- # nvmfpid=80063 00:19:53.085 02:17:06 -- nvmf/common.sh@470 -- # waitforlisten 80063 00:19:53.085 02:17:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:53.085 02:17:06 -- common/autotest_common.sh@819 -- # '[' -z 80063 ']' 00:19:53.085 02:17:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.085 02:17:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:53.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.085 02:17:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.085 02:17:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:53.085 02:17:06 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 [2024-05-14 02:17:06.329635] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:53.085 [2024-05-14 02:17:06.329756] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.085 [2024-05-14 02:17:06.469131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:53.085 [2024-05-14 02:17:06.522552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:53.085 [2024-05-14 02:17:06.522684] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.085 [2024-05-14 02:17:06.522697] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.085 [2024-05-14 02:17:06.522706] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
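The multicontroller target configured in the next stretch of trace follows the same pattern, but with two subsystems and, crucially, two TCP listeners (4420 and 4421) on the same subsystem, so a second path exists for the multipath/failover checks further down. Roughly, again in rpc.py form (a sketch of what the traced rpc_cmd calls do, not a verbatim transcript):

  RPC="scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # second path
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421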
00:19:53.085 [2024-05-14 02:17:06.522850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.085 [2024-05-14 02:17:06.523341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.085 [2024-05-14 02:17:06.523477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.085 02:17:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:53.085 02:17:07 -- common/autotest_common.sh@852 -- # return 0 00:19:53.085 02:17:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:53.085 02:17:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 02:17:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.085 02:17:07 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 [2024-05-14 02:17:07.395085] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.085 02:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 Malloc0 00:19:53.085 02:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 02:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 02:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 [2024-05-14 02:17:07.450436] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.085 02:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 [2024-05-14 02:17:07.458421] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:53.085 02:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 Malloc1 00:19:53.085 02:17:07 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 02:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 02:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 02:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:53.085 02:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:53.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.085 02:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.085 02:17:07 -- host/multicontroller.sh@44 -- # bdevperf_pid=80115 00:19:53.085 02:17:07 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:53.085 02:17:07 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:53.085 02:17:07 -- host/multicontroller.sh@47 -- # waitforlisten 80115 /var/tmp/bdevperf.sock 00:19:53.085 02:17:07 -- common/autotest_common.sh@819 -- # '[' -z 80115 ']' 00:19:53.085 02:17:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.085 02:17:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:53.085 02:17:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
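From here the test drives the bdevperf process it just launched (-z, so it waits for RPCs on /var/tmp/bdevperf.sock) instead of the kernel initiator: it attaches an NVMe bdev controller over the first path, then checks that conflicting re-attach attempts are rejected and that only a genuinely new path on port 4421, or a separately named controller, is accepted. A condensed sketch of the host-side calls and their outcomes, with the socket and addresses from the trace:

  BPERF="scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # First path: controller NVMe0, subsystem cnode1, host port 60000 -> succeeds, bdev NVMe0n1 appears.
  $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # Re-attach attempts that the trace expects to fail:
  #   same name, different hostnqn (-q ...)    -> "already exists with the specified network path"
  #   same name, different subsystem (cnode2)  -> "already exists with the specified network path"
  #   same name/path with -x disable           -> "already exists and multipath is disabled"
  #   same name/path with -x failover          -> "already exists with the specified network path"

  # A new path on the second listener is accepted as an additional path, and can be detached again:
  $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $BPERF bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # ...as is a second, separately named controller on that listener:
  $BPERF bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  $BPERF bdev_nvme_get_controllers | grep -c NVMe                              # 2 controllers at this point
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # run the queued write workload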
00:19:53.085 02:17:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:53.085 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:54.021 02:17:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:54.021 02:17:08 -- common/autotest_common.sh@852 -- # return 0 00:19:54.021 02:17:08 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:54.021 02:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.021 02:17:08 -- common/autotest_common.sh@10 -- # set +x 00:19:54.281 NVMe0n1 00:19:54.281 02:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.281 02:17:08 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:54.281 02:17:08 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:54.281 02:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@10 -- # set +x 00:19:54.281 02:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.281 1 00:19:54.281 02:17:08 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:54.281 02:17:08 -- common/autotest_common.sh@640 -- # local es=0 00:19:54.281 02:17:08 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:54.281 02:17:08 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.281 02:17:08 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:54.281 02:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@10 -- # set +x 00:19:54.281 2024/05/14 02:17:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:54.281 request: 00:19:54.281 { 00:19:54.281 "method": "bdev_nvme_attach_controller", 00:19:54.281 "params": { 00:19:54.281 "name": "NVMe0", 00:19:54.281 "trtype": "tcp", 00:19:54.281 "traddr": "10.0.0.2", 00:19:54.281 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:54.281 "hostaddr": "10.0.0.2", 00:19:54.281 "hostsvcid": "60000", 00:19:54.281 "adrfam": "ipv4", 00:19:54.281 "trsvcid": "4420", 00:19:54.281 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:54.281 } 00:19:54.281 } 00:19:54.281 Got JSON-RPC error response 00:19:54.281 GoRPCClient: error on JSON-RPC call 00:19:54.281 02:17:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:54.281 02:17:08 -- 
common/autotest_common.sh@643 -- # es=1 00:19:54.281 02:17:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:54.281 02:17:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:54.281 02:17:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:54.281 02:17:08 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:54.281 02:17:08 -- common/autotest_common.sh@640 -- # local es=0 00:19:54.281 02:17:08 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:54.281 02:17:08 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.281 02:17:08 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:54.281 02:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@10 -- # set +x 00:19:54.281 2024/05/14 02:17:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:54.281 request: 00:19:54.281 { 00:19:54.281 "method": "bdev_nvme_attach_controller", 00:19:54.281 "params": { 00:19:54.281 "name": "NVMe0", 00:19:54.281 "trtype": "tcp", 00:19:54.281 "traddr": "10.0.0.2", 00:19:54.281 "hostaddr": "10.0.0.2", 00:19:54.281 "hostsvcid": "60000", 00:19:54.281 "adrfam": "ipv4", 00:19:54.281 "trsvcid": "4420", 00:19:54.281 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:54.281 } 00:19:54.281 } 00:19:54.281 Got JSON-RPC error response 00:19:54.281 GoRPCClient: error on JSON-RPC call 00:19:54.281 02:17:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:54.281 02:17:08 -- common/autotest_common.sh@643 -- # es=1 00:19:54.281 02:17:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:54.281 02:17:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:54.281 02:17:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:54.281 02:17:08 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@640 -- # local es=0 00:19:54.281 02:17:08 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:54.281 02:17:08 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.281 02:17:08 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@10 -- # set +x 00:19:54.281 2024/05/14 02:17:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:54.281 request: 00:19:54.281 { 00:19:54.281 "method": "bdev_nvme_attach_controller", 00:19:54.281 "params": { 00:19:54.281 "name": "NVMe0", 00:19:54.281 "trtype": "tcp", 00:19:54.281 "traddr": "10.0.0.2", 00:19:54.281 "hostaddr": "10.0.0.2", 00:19:54.281 "hostsvcid": "60000", 00:19:54.281 "adrfam": "ipv4", 00:19:54.281 "trsvcid": "4420", 00:19:54.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.281 "multipath": "disable" 00:19:54.281 } 00:19:54.281 } 00:19:54.281 Got JSON-RPC error response 00:19:54.281 GoRPCClient: error on JSON-RPC call 00:19:54.281 02:17:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:54.281 02:17:08 -- common/autotest_common.sh@643 -- # es=1 00:19:54.281 02:17:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:54.281 02:17:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:54.281 02:17:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:54.281 02:17:08 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:54.281 02:17:08 -- common/autotest_common.sh@640 -- # local es=0 00:19:54.281 02:17:08 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:54.281 02:17:08 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:54.281 02:17:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.281 02:17:08 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:54.281 02:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@10 -- # set +x 00:19:54.281 2024/05/14 02:17:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:54.281 request: 00:19:54.281 { 00:19:54.281 "method": "bdev_nvme_attach_controller", 00:19:54.281 "params": { 00:19:54.281 "name": "NVMe0", 
00:19:54.281 "trtype": "tcp", 00:19:54.281 "traddr": "10.0.0.2", 00:19:54.281 "hostaddr": "10.0.0.2", 00:19:54.281 "hostsvcid": "60000", 00:19:54.281 "adrfam": "ipv4", 00:19:54.281 "trsvcid": "4420", 00:19:54.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.281 "multipath": "failover" 00:19:54.281 } 00:19:54.281 } 00:19:54.281 Got JSON-RPC error response 00:19:54.281 GoRPCClient: error on JSON-RPC call 00:19:54.281 02:17:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:54.281 02:17:08 -- common/autotest_common.sh@643 -- # es=1 00:19:54.281 02:17:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:54.281 02:17:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:54.281 02:17:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:54.281 02:17:08 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:54.281 02:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@10 -- # set +x 00:19:54.281 00:19:54.281 02:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.281 02:17:08 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:54.281 02:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@10 -- # set +x 00:19:54.281 02:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.281 02:17:08 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:54.281 02:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.281 02:17:08 -- common/autotest_common.sh@10 -- # set +x 00:19:54.282 00:19:54.282 02:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.282 02:17:08 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:54.282 02:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.282 02:17:08 -- common/autotest_common.sh@10 -- # set +x 00:19:54.282 02:17:08 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:54.539 02:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.539 02:17:08 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:54.539 02:17:08 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:55.473 0 00:19:55.473 02:17:10 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:55.473 02:17:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.473 02:17:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.473 02:17:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.473 02:17:10 -- host/multicontroller.sh@100 -- # killprocess 80115 00:19:55.473 02:17:10 -- common/autotest_common.sh@926 -- # '[' -z 80115 ']' 00:19:55.473 02:17:10 -- common/autotest_common.sh@930 -- # kill -0 80115 00:19:55.473 02:17:10 -- common/autotest_common.sh@931 -- # uname 00:19:55.473 02:17:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:55.473 02:17:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80115 00:19:55.731 killing process with pid 80115 00:19:55.731 
02:17:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:55.731 02:17:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:55.731 02:17:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80115' 00:19:55.731 02:17:10 -- common/autotest_common.sh@945 -- # kill 80115 00:19:55.731 02:17:10 -- common/autotest_common.sh@950 -- # wait 80115 00:19:55.731 02:17:10 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.731 02:17:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.731 02:17:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.731 02:17:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.731 02:17:10 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:55.731 02:17:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.731 02:17:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.731 02:17:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.731 02:17:10 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:55.731 02:17:10 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:55.731 02:17:10 -- common/autotest_common.sh@1597 -- # read -r file 00:19:55.731 02:17:10 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:55.731 02:17:10 -- common/autotest_common.sh@1596 -- # sort -u 00:19:55.731 02:17:10 -- common/autotest_common.sh@1598 -- # cat 00:19:55.731 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:55.731 [2024-05-14 02:17:07.570468] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:55.731 [2024-05-14 02:17:07.570684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80115 ] 00:19:55.731 [2024-05-14 02:17:07.712544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.731 [2024-05-14 02:17:07.780442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.731 [2024-05-14 02:17:08.847684] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 85066ac4-c5e5-4d6d-a49b-a3852063f8fe already exists 00:19:55.731 [2024-05-14 02:17:08.847757] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:85066ac4-c5e5-4d6d-a49b-a3852063f8fe alias for bdev NVMe1n1 00:19:55.731 [2024-05-14 02:17:08.847794] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:55.732 Running I/O for 1 seconds... 
00:19:55.732 00:19:55.732 Latency(us) 00:19:55.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.732 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:55.732 NVMe0n1 : 1.00 19753.03 77.16 0.00 0.00 6471.02 2800.17 13524.25 00:19:55.732 =================================================================================================================== 00:19:55.732 Total : 19753.03 77.16 0.00 0.00 6471.02 2800.17 13524.25 00:19:55.732 Received shutdown signal, test time was about 1.000000 seconds 00:19:55.732 00:19:55.732 Latency(us) 00:19:55.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.732 =================================================================================================================== 00:19:55.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.732 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:55.732 02:17:10 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:55.732 02:17:10 -- common/autotest_common.sh@1597 -- # read -r file 00:19:55.732 02:17:10 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:55.732 02:17:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:55.732 02:17:10 -- nvmf/common.sh@116 -- # sync 00:19:55.991 02:17:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:55.991 02:17:10 -- nvmf/common.sh@119 -- # set +e 00:19:55.991 02:17:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:55.991 02:17:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:55.991 rmmod nvme_tcp 00:19:55.991 rmmod nvme_fabrics 00:19:55.991 rmmod nvme_keyring 00:19:55.991 02:17:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:55.991 02:17:10 -- nvmf/common.sh@123 -- # set -e 00:19:55.991 02:17:10 -- nvmf/common.sh@124 -- # return 0 00:19:55.991 02:17:10 -- nvmf/common.sh@477 -- # '[' -n 80063 ']' 00:19:55.991 02:17:10 -- nvmf/common.sh@478 -- # killprocess 80063 00:19:55.991 02:17:10 -- common/autotest_common.sh@926 -- # '[' -z 80063 ']' 00:19:55.991 02:17:10 -- common/autotest_common.sh@930 -- # kill -0 80063 00:19:55.991 02:17:10 -- common/autotest_common.sh@931 -- # uname 00:19:55.991 02:17:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:55.991 02:17:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80063 00:19:55.991 killing process with pid 80063 00:19:55.991 02:17:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:55.991 02:17:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:55.991 02:17:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80063' 00:19:55.991 02:17:10 -- common/autotest_common.sh@945 -- # kill 80063 00:19:55.991 02:17:10 -- common/autotest_common.sh@950 -- # wait 80063 00:19:56.249 02:17:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:56.249 02:17:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:56.249 02:17:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:56.249 02:17:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.249 02:17:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:56.249 02:17:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.249 02:17:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.249 02:17:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.249 02:17:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:56.249 
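The teardown that runs above amounts to deleting the two test subsystems on the target, unloading the host-side NVMe-oF modules, stopping the target process and flushing the test address. A hedged recap in plain commands (the PID and paths are specific to this run):

  # Recap of the cleanup traced above; values are from this run only.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
  modprobe -v -r nvme-tcp          # also drops nvme_fabrics/nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  kill 80063                       # the nvmf_tgt started for this test
  ip -4 addr flush nvmf_init_if    # remove the initiator-side test address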
00:19:56.249 real 0m4.828s 00:19:56.249 user 0m15.498s 00:19:56.249 sys 0m0.900s 00:19:56.249 02:17:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.249 ************************************ 00:19:56.249 END TEST nvmf_multicontroller 00:19:56.249 ************************************ 00:19:56.249 02:17:10 -- common/autotest_common.sh@10 -- # set +x 00:19:56.249 02:17:10 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:56.249 02:17:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:56.249 02:17:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:56.249 02:17:10 -- common/autotest_common.sh@10 -- # set +x 00:19:56.249 ************************************ 00:19:56.249 START TEST nvmf_aer 00:19:56.249 ************************************ 00:19:56.249 02:17:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:56.249 * Looking for test storage... 00:19:56.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.249 02:17:10 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.249 02:17:10 -- nvmf/common.sh@7 -- # uname -s 00:19:56.249 02:17:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.249 02:17:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.249 02:17:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.249 02:17:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.249 02:17:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.249 02:17:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.249 02:17:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.249 02:17:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.249 02:17:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.249 02:17:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.250 02:17:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:19:56.250 02:17:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:19:56.250 02:17:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.250 02:17:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.250 02:17:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.250 02:17:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.250 02:17:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.250 02:17:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.250 02:17:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.250 02:17:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.250 02:17:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.250 02:17:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.250 02:17:10 -- paths/export.sh@5 -- # export PATH 00:19:56.250 02:17:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.250 02:17:10 -- nvmf/common.sh@46 -- # : 0 00:19:56.250 02:17:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:56.250 02:17:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:56.250 02:17:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:56.250 02:17:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.250 02:17:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.250 02:17:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:56.250 02:17:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:56.250 02:17:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:56.250 02:17:10 -- host/aer.sh@11 -- # nvmftestinit 00:19:56.250 02:17:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:56.250 02:17:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.250 02:17:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:56.250 02:17:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:56.250 02:17:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:56.250 02:17:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.250 02:17:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.250 02:17:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.250 02:17:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:56.250 02:17:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:56.250 02:17:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:56.250 02:17:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:56.250 02:17:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:56.250 02:17:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:56.250 02:17:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.250 02:17:10 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.250 02:17:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:56.250 02:17:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:56.250 02:17:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:56.250 02:17:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:56.250 02:17:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:56.250 02:17:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.250 02:17:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:56.250 02:17:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:56.250 02:17:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:56.250 02:17:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:56.250 02:17:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:56.250 02:17:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:56.250 Cannot find device "nvmf_tgt_br" 00:19:56.250 02:17:10 -- nvmf/common.sh@154 -- # true 00:19:56.250 02:17:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.509 Cannot find device "nvmf_tgt_br2" 00:19:56.509 02:17:10 -- nvmf/common.sh@155 -- # true 00:19:56.509 02:17:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:56.509 02:17:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:56.509 Cannot find device "nvmf_tgt_br" 00:19:56.509 02:17:10 -- nvmf/common.sh@157 -- # true 00:19:56.509 02:17:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:56.509 Cannot find device "nvmf_tgt_br2" 00:19:56.509 02:17:10 -- nvmf/common.sh@158 -- # true 00:19:56.509 02:17:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:56.509 02:17:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:56.509 02:17:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.509 02:17:10 -- nvmf/common.sh@161 -- # true 00:19:56.509 02:17:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.509 02:17:10 -- nvmf/common.sh@162 -- # true 00:19:56.509 02:17:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:56.509 02:17:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:56.509 02:17:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:56.509 02:17:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:56.509 02:17:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:56.509 02:17:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:56.509 02:17:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:56.509 02:17:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:56.509 02:17:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:56.509 02:17:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:56.509 02:17:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:56.509 02:17:11 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:56.509 02:17:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:56.509 02:17:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:56.509 02:17:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:56.509 02:17:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:56.509 02:17:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:56.509 02:17:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:56.509 02:17:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:56.509 02:17:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.509 02:17:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.509 02:17:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.509 02:17:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.509 02:17:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:56.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:19:56.509 00:19:56.509 --- 10.0.0.2 ping statistics --- 00:19:56.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.509 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:19:56.509 02:17:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:56.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:19:56.768 00:19:56.768 --- 10.0.0.3 ping statistics --- 00:19:56.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.768 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:56.768 02:17:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:56.768 00:19:56.768 --- 10.0.0.1 ping statistics --- 00:19:56.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.768 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:56.768 02:17:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.768 02:17:11 -- nvmf/common.sh@421 -- # return 0 00:19:56.768 02:17:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:56.768 02:17:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.768 02:17:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:56.768 02:17:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:56.768 02:17:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.768 02:17:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:56.768 02:17:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:56.768 02:17:11 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:56.768 02:17:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:56.768 02:17:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:56.768 02:17:11 -- common/autotest_common.sh@10 -- # set +x 00:19:56.768 02:17:11 -- nvmf/common.sh@469 -- # nvmfpid=80367 00:19:56.768 02:17:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:56.768 02:17:11 -- nvmf/common.sh@470 -- # waitforlisten 80367 00:19:56.768 02:17:11 -- common/autotest_common.sh@819 -- # '[' -z 80367 ']' 00:19:56.768 02:17:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.768 02:17:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:56.768 02:17:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.768 02:17:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:56.768 02:17:11 -- common/autotest_common.sh@10 -- # set +x 00:19:56.768 [2024-05-14 02:17:11.177719] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:56.768 [2024-05-14 02:17:11.177866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.768 [2024-05-14 02:17:11.316196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.027 [2024-05-14 02:17:11.384271] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:57.027 [2024-05-14 02:17:11.384440] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.027 [2024-05-14 02:17:11.384456] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.027 [2024-05-14 02:17:11.384467] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
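nvmf_veth_init, traced above, gives the target its own network namespace and wires it to the initiator through a Linux bridge before any NVMe/TCP traffic is attempted. A condensed, hedged sketch of that topology (interface names and addresses are the test defaults seen in the trace; the second target interface on 10.0.0.3 is omitted):

  # Condensed recap of the veth/bridge topology built above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # the initiator side must now reach the target address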
00:19:57.027 [2024-05-14 02:17:11.384597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.027 [2024-05-14 02:17:11.384925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.027 [2024-05-14 02:17:11.385000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.027 [2024-05-14 02:17:11.385003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.595 02:17:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:57.595 02:17:12 -- common/autotest_common.sh@852 -- # return 0 00:19:57.595 02:17:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:57.595 02:17:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:57.595 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:57.595 02:17:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.595 02:17:12 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:57.595 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.595 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:57.855 [2024-05-14 02:17:12.186633] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.855 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.855 02:17:12 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:57.855 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.855 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:57.855 Malloc0 00:19:57.855 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.855 02:17:12 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:57.855 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.855 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:57.855 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.855 02:17:12 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:57.855 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.855 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:57.855 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.855 02:17:12 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.855 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.855 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:57.855 [2024-05-14 02:17:12.247966] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.855 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.855 02:17:12 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:57.855 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.855 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:57.855 [2024-05-14 02:17:12.255711] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:57.855 [ 00:19:57.855 { 00:19:57.855 "allow_any_host": true, 00:19:57.855 "hosts": [], 00:19:57.855 "listen_addresses": [], 00:19:57.855 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:57.855 "subtype": "Discovery" 00:19:57.855 }, 00:19:57.855 { 00:19:57.855 "allow_any_host": true, 00:19:57.855 "hosts": 
[], 00:19:57.855 "listen_addresses": [ 00:19:57.855 { 00:19:57.855 "adrfam": "IPv4", 00:19:57.855 "traddr": "10.0.0.2", 00:19:57.855 "transport": "TCP", 00:19:57.855 "trsvcid": "4420", 00:19:57.855 "trtype": "TCP" 00:19:57.855 } 00:19:57.855 ], 00:19:57.855 "max_cntlid": 65519, 00:19:57.855 "max_namespaces": 2, 00:19:57.855 "min_cntlid": 1, 00:19:57.855 "model_number": "SPDK bdev Controller", 00:19:57.855 "namespaces": [ 00:19:57.855 { 00:19:57.855 "bdev_name": "Malloc0", 00:19:57.855 "name": "Malloc0", 00:19:57.855 "nguid": "2A8BD4FF9C90414AA76A71DF85AE0E18", 00:19:57.855 "nsid": 1, 00:19:57.855 "uuid": "2a8bd4ff-9c90-414a-a76a-71df85ae0e18" 00:19:57.855 } 00:19:57.855 ], 00:19:57.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.855 "serial_number": "SPDK00000000000001", 00:19:57.855 "subtype": "NVMe" 00:19:57.855 } 00:19:57.855 ] 00:19:57.855 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.855 02:17:12 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:57.855 02:17:12 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:57.855 02:17:12 -- host/aer.sh@33 -- # aerpid=80421 00:19:57.855 02:17:12 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:57.855 02:17:12 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:57.855 02:17:12 -- common/autotest_common.sh@1244 -- # local i=0 00:19:57.855 02:17:12 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:57.855 02:17:12 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:19:57.855 02:17:12 -- common/autotest_common.sh@1247 -- # i=1 00:19:57.855 02:17:12 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:57.855 02:17:12 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:57.855 02:17:12 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:19:57.855 02:17:12 -- common/autotest_common.sh@1247 -- # i=2 00:19:57.855 02:17:12 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:58.115 02:17:12 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:58.115 02:17:12 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:58.115 02:17:12 -- common/autotest_common.sh@1255 -- # return 0 00:19:58.115 02:17:12 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:58.115 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.115 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.115 Malloc1 00:19:58.115 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.115 02:17:12 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:58.115 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.115 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.115 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.115 02:17:12 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:58.115 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.115 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.115 [ 00:19:58.115 { 00:19:58.115 "allow_any_host": true, 00:19:58.115 "hosts": [], 00:19:58.115 "listen_addresses": [], 00:19:58.115 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:58.115 "subtype": "Discovery" 00:19:58.115 }, 00:19:58.115 { 00:19:58.115 "allow_any_host": true, 00:19:58.115 "hosts": [], 00:19:58.115 "listen_addresses": [ 00:19:58.115 { 00:19:58.115 "adrfam": "IPv4", 00:19:58.115 "traddr": "10.0.0.2", 00:19:58.115 "transport": "TCP", 00:19:58.115 "trsvcid": "4420", 00:19:58.115 Asynchronous Event Request test 00:19:58.115 Attaching to 10.0.0.2 00:19:58.115 Attached to 10.0.0.2 00:19:58.115 Registering asynchronous event callbacks... 00:19:58.115 Starting namespace attribute notice tests for all controllers... 00:19:58.115 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:58.115 aer_cb - Changed Namespace 00:19:58.115 Cleaning up... 
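This is the heart of the AER check traced above: the aer test binary connects to cnode1 and registers for asynchronous events, and adding a second namespace (Malloc1) makes the target emit a namespace-attribute-changed notice, which the tool reports as "aer_cb - Changed Namespace" before cleaning up. A hedged sketch of driving the same two steps by hand (paths, NQN and addresses are the ones from this run):

  # Sketch: reproduce the namespace-changed AEN exercise shown above.
  SPDK=/home/vagrant/spdk_repo/spdk
  # 1. Start the AER listener; -t makes it touch a file when it is ready,
  #    which is how host/aer.sh above knows it is safe to continue.
  $SPDK/test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  # 2. Add a second namespace to the subsystem; the resulting AEN is what
  #    the listener logs as "aer_cb - Changed Namespace".
  $SPDK/scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait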
00:19:58.115 "trtype": "TCP" 00:19:58.115 } 00:19:58.115 ], 00:19:58.115 "max_cntlid": 65519, 00:19:58.115 "max_namespaces": 2, 00:19:58.115 "min_cntlid": 1, 00:19:58.115 "model_number": "SPDK bdev Controller", 00:19:58.115 "namespaces": [ 00:19:58.115 { 00:19:58.115 "bdev_name": "Malloc0", 00:19:58.115 "name": "Malloc0", 00:19:58.115 "nguid": "2A8BD4FF9C90414AA76A71DF85AE0E18", 00:19:58.115 "nsid": 1, 00:19:58.115 "uuid": "2a8bd4ff-9c90-414a-a76a-71df85ae0e18" 00:19:58.115 }, 00:19:58.115 { 00:19:58.115 "bdev_name": "Malloc1", 00:19:58.115 "name": "Malloc1", 00:19:58.115 "nguid": "2A7F84A632684D78876A9278368B4D97", 00:19:58.115 "nsid": 2, 00:19:58.115 "uuid": "2a7f84a6-3268-4d78-876a-9278368b4d97" 00:19:58.115 } 00:19:58.115 ], 00:19:58.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.115 "serial_number": "SPDK00000000000001", 00:19:58.115 "subtype": "NVMe" 00:19:58.115 } 00:19:58.115 ] 00:19:58.115 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.115 02:17:12 -- host/aer.sh@43 -- # wait 80421 00:19:58.115 02:17:12 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:58.115 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.115 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.115 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.115 02:17:12 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:58.115 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.115 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.115 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.115 02:17:12 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.115 02:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.115 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.115 02:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.115 02:17:12 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:58.115 02:17:12 -- host/aer.sh@51 -- # nvmftestfini 00:19:58.115 02:17:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:58.115 02:17:12 -- nvmf/common.sh@116 -- # sync 00:19:58.115 02:17:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:58.115 02:17:12 -- nvmf/common.sh@119 -- # set +e 00:19:58.115 02:17:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:58.115 02:17:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:58.115 rmmod nvme_tcp 00:19:58.115 rmmod nvme_fabrics 00:19:58.115 rmmod nvme_keyring 00:19:58.115 02:17:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:58.115 02:17:12 -- nvmf/common.sh@123 -- # set -e 00:19:58.115 02:17:12 -- nvmf/common.sh@124 -- # return 0 00:19:58.115 02:17:12 -- nvmf/common.sh@477 -- # '[' -n 80367 ']' 00:19:58.115 02:17:12 -- nvmf/common.sh@478 -- # killprocess 80367 00:19:58.115 02:17:12 -- common/autotest_common.sh@926 -- # '[' -z 80367 ']' 00:19:58.115 02:17:12 -- common/autotest_common.sh@930 -- # kill -0 80367 00:19:58.115 02:17:12 -- common/autotest_common.sh@931 -- # uname 00:19:58.115 02:17:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:58.115 02:17:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80367 00:19:58.374 02:17:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:58.374 killing process with pid 80367 00:19:58.374 02:17:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:58.374 02:17:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
80367' 00:19:58.374 02:17:12 -- common/autotest_common.sh@945 -- # kill 80367 00:19:58.374 [2024-05-14 02:17:12.718388] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:58.374 02:17:12 -- common/autotest_common.sh@950 -- # wait 80367 00:19:58.374 02:17:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:58.374 02:17:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:58.374 02:17:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:58.374 02:17:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.374 02:17:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:58.374 02:17:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.374 02:17:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.374 02:17:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.374 02:17:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:58.633 00:19:58.633 real 0m2.264s 00:19:58.633 user 0m6.350s 00:19:58.633 sys 0m0.540s 00:19:58.633 02:17:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.633 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.633 ************************************ 00:19:58.633 END TEST nvmf_aer 00:19:58.633 ************************************ 00:19:58.633 02:17:12 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:58.633 02:17:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:58.633 02:17:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:58.633 02:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:58.633 ************************************ 00:19:58.633 START TEST nvmf_async_init 00:19:58.633 ************************************ 00:19:58.633 02:17:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:58.633 * Looking for test storage... 
00:19:58.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:58.633 02:17:13 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:58.633 02:17:13 -- nvmf/common.sh@7 -- # uname -s 00:19:58.633 02:17:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.633 02:17:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.633 02:17:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.633 02:17:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.633 02:17:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.633 02:17:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.633 02:17:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.633 02:17:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.633 02:17:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.633 02:17:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.633 02:17:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:19:58.633 02:17:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:19:58.633 02:17:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.633 02:17:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.633 02:17:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:58.633 02:17:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:58.633 02:17:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.633 02:17:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.633 02:17:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.633 02:17:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.633 02:17:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.633 02:17:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.633 02:17:13 -- 
paths/export.sh@5 -- # export PATH 00:19:58.633 02:17:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.633 02:17:13 -- nvmf/common.sh@46 -- # : 0 00:19:58.633 02:17:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:58.633 02:17:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:58.634 02:17:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:58.634 02:17:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.634 02:17:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.634 02:17:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:58.634 02:17:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:58.634 02:17:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:58.634 02:17:13 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:58.634 02:17:13 -- host/async_init.sh@14 -- # null_block_size=512 00:19:58.634 02:17:13 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:58.634 02:17:13 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:58.634 02:17:13 -- host/async_init.sh@20 -- # uuidgen 00:19:58.634 02:17:13 -- host/async_init.sh@20 -- # tr -d - 00:19:58.634 02:17:13 -- host/async_init.sh@20 -- # nguid=b3bf0ca4e356424eb15f946bea3859bd 00:19:58.634 02:17:13 -- host/async_init.sh@22 -- # nvmftestinit 00:19:58.634 02:17:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:58.634 02:17:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.634 02:17:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:58.634 02:17:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:58.634 02:17:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:58.634 02:17:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.634 02:17:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.634 02:17:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.634 02:17:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:58.634 02:17:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:58.634 02:17:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:58.634 02:17:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:58.634 02:17:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:58.634 02:17:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:58.634 02:17:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.634 02:17:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.634 02:17:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:58.634 02:17:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:58.634 02:17:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:58.634 02:17:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:58.634 02:17:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:58.634 02:17:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.634 02:17:13 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:58.634 02:17:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:58.634 02:17:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:58.634 02:17:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:58.634 02:17:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:58.634 02:17:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:58.634 Cannot find device "nvmf_tgt_br" 00:19:58.634 02:17:13 -- nvmf/common.sh@154 -- # true 00:19:58.634 02:17:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:58.634 Cannot find device "nvmf_tgt_br2" 00:19:58.634 02:17:13 -- nvmf/common.sh@155 -- # true 00:19:58.634 02:17:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:58.634 02:17:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:58.634 Cannot find device "nvmf_tgt_br" 00:19:58.634 02:17:13 -- nvmf/common.sh@157 -- # true 00:19:58.634 02:17:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:58.634 Cannot find device "nvmf_tgt_br2" 00:19:58.634 02:17:13 -- nvmf/common.sh@158 -- # true 00:19:58.634 02:17:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:58.893 02:17:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:58.893 02:17:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.893 02:17:13 -- nvmf/common.sh@161 -- # true 00:19:58.893 02:17:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.893 02:17:13 -- nvmf/common.sh@162 -- # true 00:19:58.893 02:17:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:58.893 02:17:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:58.893 02:17:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:58.893 02:17:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:58.893 02:17:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:58.893 02:17:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:58.893 02:17:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:58.893 02:17:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:58.893 02:17:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:58.893 02:17:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:58.893 02:17:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:58.893 02:17:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:58.893 02:17:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:58.893 02:17:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:58.893 02:17:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:58.893 02:17:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:58.893 02:17:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:58.893 02:17:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:58.893 02:17:13 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:58.893 02:17:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:58.893 02:17:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:58.893 02:17:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:58.893 02:17:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:58.893 02:17:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:58.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:19:58.893 00:19:58.893 --- 10.0.0.2 ping statistics --- 00:19:58.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.893 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:58.893 02:17:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:58.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:58.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:19:58.893 00:19:58.893 --- 10.0.0.3 ping statistics --- 00:19:58.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.893 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:58.893 02:17:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:58.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:58.893 00:19:58.893 --- 10.0.0.1 ping statistics --- 00:19:58.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.893 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:58.893 02:17:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.893 02:17:13 -- nvmf/common.sh@421 -- # return 0 00:19:58.893 02:17:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:58.893 02:17:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.893 02:17:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:58.893 02:17:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:58.893 02:17:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.893 02:17:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:58.893 02:17:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:58.893 02:17:13 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:58.893 02:17:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:58.893 02:17:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:58.893 02:17:13 -- common/autotest_common.sh@10 -- # set +x 00:19:58.893 02:17:13 -- nvmf/common.sh@469 -- # nvmfpid=80590 00:19:58.893 02:17:13 -- nvmf/common.sh@470 -- # waitforlisten 80590 00:19:58.893 02:17:13 -- common/autotest_common.sh@819 -- # '[' -z 80590 ']' 00:19:58.893 02:17:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:58.893 02:17:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.893 02:17:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:58.893 02:17:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
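For the async_init run the target is again launched inside the nvmf_tgt_ns_spdk namespace, and the helper then waits for its RPC socket before issuing any calls. A rough, hedged illustration of that start-and-wait step (the real waitforlisten helper is more careful; rpc_get_methods is only used here as a cheap liveness probe):

  # Illustration only: start nvmf_tgt in the target namespace and poll
  # until its default RPC socket answers.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"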
00:19:58.893 02:17:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:58.893 02:17:13 -- common/autotest_common.sh@10 -- # set +x 00:19:59.151 [2024-05-14 02:17:13.531162] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:59.151 [2024-05-14 02:17:13.531279] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.151 [2024-05-14 02:17:13.674124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.409 [2024-05-14 02:17:13.747679] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:59.409 [2024-05-14 02:17:13.747911] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.409 [2024-05-14 02:17:13.747930] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.409 [2024-05-14 02:17:13.747952] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.409 [2024-05-14 02:17:13.747987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.976 02:17:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:59.976 02:17:14 -- common/autotest_common.sh@852 -- # return 0 00:19:59.976 02:17:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:59.976 02:17:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:59.976 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:19:59.976 02:17:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.976 02:17:14 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:59.976 02:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.976 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:19:59.976 [2024-05-14 02:17:14.537208] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.976 02:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.976 02:17:14 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:59.976 02:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.976 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:19:59.976 null0 00:19:59.976 02:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.976 02:17:14 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:59.976 02:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.976 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:19:59.976 02:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.976 02:17:14 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:59.976 02:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.976 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:20:00.235 02:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.236 02:17:14 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b3bf0ca4e356424eb15f946bea3859bd 00:20:00.236 02:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.236 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:20:00.236 02:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.236 02:17:14 -- host/async_init.sh@31 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:00.236 02:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.236 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:20:00.236 [2024-05-14 02:17:14.577335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.236 02:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.236 02:17:14 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:00.236 02:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.236 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:20:00.236 nvme0n1 00:20:00.236 02:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.236 02:17:14 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:00.236 02:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.236 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:20:00.236 [ 00:20:00.236 { 00:20:00.236 "aliases": [ 00:20:00.236 "b3bf0ca4-e356-424e-b15f-946bea3859bd" 00:20:00.236 ], 00:20:00.236 "assigned_rate_limits": { 00:20:00.236 "r_mbytes_per_sec": 0, 00:20:00.236 "rw_ios_per_sec": 0, 00:20:00.236 "rw_mbytes_per_sec": 0, 00:20:00.236 "w_mbytes_per_sec": 0 00:20:00.236 }, 00:20:00.236 "block_size": 512, 00:20:00.236 "claimed": false, 00:20:00.236 "driver_specific": { 00:20:00.236 "mp_policy": "active_passive", 00:20:00.236 "nvme": [ 00:20:00.236 { 00:20:00.236 "ctrlr_data": { 00:20:00.236 "ana_reporting": false, 00:20:00.236 "cntlid": 1, 00:20:00.236 "firmware_revision": "24.01.1", 00:20:00.495 "model_number": "SPDK bdev Controller", 00:20:00.495 "multi_ctrlr": true, 00:20:00.495 "oacs": { 00:20:00.495 "firmware": 0, 00:20:00.495 "format": 0, 00:20:00.495 "ns_manage": 0, 00:20:00.495 "security": 0 00:20:00.495 }, 00:20:00.495 "serial_number": "00000000000000000000", 00:20:00.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.495 "vendor_id": "0x8086" 00:20:00.495 }, 00:20:00.495 "ns_data": { 00:20:00.495 "can_share": true, 00:20:00.495 "id": 1 00:20:00.495 }, 00:20:00.495 "trid": { 00:20:00.495 "adrfam": "IPv4", 00:20:00.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.495 "traddr": "10.0.0.2", 00:20:00.495 "trsvcid": "4420", 00:20:00.495 "trtype": "TCP" 00:20:00.495 }, 00:20:00.495 "vs": { 00:20:00.495 "nvme_version": "1.3" 00:20:00.495 } 00:20:00.495 } 00:20:00.495 ] 00:20:00.495 }, 00:20:00.495 "name": "nvme0n1", 00:20:00.495 "num_blocks": 2097152, 00:20:00.495 "product_name": "NVMe disk", 00:20:00.495 "supported_io_types": { 00:20:00.495 "abort": true, 00:20:00.495 "compare": true, 00:20:00.495 "compare_and_write": true, 00:20:00.495 "flush": true, 00:20:00.495 "nvme_admin": true, 00:20:00.495 "nvme_io": true, 00:20:00.495 "read": true, 00:20:00.495 "reset": true, 00:20:00.495 "unmap": false, 00:20:00.495 "write": true, 00:20:00.495 "write_zeroes": true 00:20:00.495 }, 00:20:00.495 "uuid": "b3bf0ca4-e356-424e-b15f-946bea3859bd", 00:20:00.495 "zoned": false 00:20:00.495 } 00:20:00.495 ] 00:20:00.495 02:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.495 02:17:14 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:00.495 02:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.496 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:20:00.496 [2024-05-14 02:17:14.842158] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:00.496 [2024-05-14 02:17:14.842273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7547a0 (9): Bad file descriptor 00:20:00.496 [2024-05-14 02:17:14.974005] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:00.496 02:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.496 02:17:14 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:00.496 02:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.496 02:17:14 -- common/autotest_common.sh@10 -- # set +x 00:20:00.496 [ 00:20:00.496 { 00:20:00.496 "aliases": [ 00:20:00.496 "b3bf0ca4-e356-424e-b15f-946bea3859bd" 00:20:00.496 ], 00:20:00.496 "assigned_rate_limits": { 00:20:00.496 "r_mbytes_per_sec": 0, 00:20:00.496 "rw_ios_per_sec": 0, 00:20:00.496 "rw_mbytes_per_sec": 0, 00:20:00.496 "w_mbytes_per_sec": 0 00:20:00.496 }, 00:20:00.496 "block_size": 512, 00:20:00.496 "claimed": false, 00:20:00.496 "driver_specific": { 00:20:00.496 "mp_policy": "active_passive", 00:20:00.496 "nvme": [ 00:20:00.496 { 00:20:00.496 "ctrlr_data": { 00:20:00.496 "ana_reporting": false, 00:20:00.496 "cntlid": 2, 00:20:00.496 "firmware_revision": "24.01.1", 00:20:00.496 "model_number": "SPDK bdev Controller", 00:20:00.496 "multi_ctrlr": true, 00:20:00.496 "oacs": { 00:20:00.496 "firmware": 0, 00:20:00.496 "format": 0, 00:20:00.496 "ns_manage": 0, 00:20:00.496 "security": 0 00:20:00.496 }, 00:20:00.496 "serial_number": "00000000000000000000", 00:20:00.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.496 "vendor_id": "0x8086" 00:20:00.496 }, 00:20:00.496 "ns_data": { 00:20:00.496 "can_share": true, 00:20:00.496 "id": 1 00:20:00.496 }, 00:20:00.496 "trid": { 00:20:00.496 "adrfam": "IPv4", 00:20:00.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.496 "traddr": "10.0.0.2", 00:20:00.496 "trsvcid": "4420", 00:20:00.496 "trtype": "TCP" 00:20:00.496 }, 00:20:00.496 "vs": { 00:20:00.496 "nvme_version": "1.3" 00:20:00.496 } 00:20:00.496 } 00:20:00.496 ] 00:20:00.496 }, 00:20:00.496 "name": "nvme0n1", 00:20:00.496 "num_blocks": 2097152, 00:20:00.496 "product_name": "NVMe disk", 00:20:00.496 "supported_io_types": { 00:20:00.496 "abort": true, 00:20:00.496 "compare": true, 00:20:00.496 "compare_and_write": true, 00:20:00.496 "flush": true, 00:20:00.496 "nvme_admin": true, 00:20:00.496 "nvme_io": true, 00:20:00.496 "read": true, 00:20:00.496 "reset": true, 00:20:00.496 "unmap": false, 00:20:00.496 "write": true, 00:20:00.496 "write_zeroes": true 00:20:00.496 }, 00:20:00.496 "uuid": "b3bf0ca4-e356-424e-b15f-946bea3859bd", 00:20:00.496 "zoned": false 00:20:00.496 } 00:20:00.496 ] 00:20:00.496 02:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.496 02:17:15 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.496 02:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.496 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:00.496 02:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.496 02:17:15 -- host/async_init.sh@53 -- # mktemp 00:20:00.496 02:17:15 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.mDSWz4N8Sx 00:20:00.496 02:17:15 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:00.496 02:17:15 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.mDSWz4N8Sx 00:20:00.496 02:17:15 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode0 --disable 00:20:00.496 02:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.496 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:00.496 02:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.496 02:17:15 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:00.496 02:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.496 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:00.496 [2024-05-14 02:17:15.042432] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.496 [2024-05-14 02:17:15.042605] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:00.496 02:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.496 02:17:15 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mDSWz4N8Sx 00:20:00.496 02:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.496 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:00.496 02:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.496 02:17:15 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mDSWz4N8Sx 00:20:00.496 02:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.496 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:00.496 [2024-05-14 02:17:15.058419] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.755 nvme0n1 00:20:00.755 02:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.755 02:17:15 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:00.755 02:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.755 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:00.755 [ 00:20:00.755 { 00:20:00.755 "aliases": [ 00:20:00.755 "b3bf0ca4-e356-424e-b15f-946bea3859bd" 00:20:00.755 ], 00:20:00.755 "assigned_rate_limits": { 00:20:00.755 "r_mbytes_per_sec": 0, 00:20:00.755 "rw_ios_per_sec": 0, 00:20:00.755 "rw_mbytes_per_sec": 0, 00:20:00.755 "w_mbytes_per_sec": 0 00:20:00.755 }, 00:20:00.755 "block_size": 512, 00:20:00.755 "claimed": false, 00:20:00.755 "driver_specific": { 00:20:00.755 "mp_policy": "active_passive", 00:20:00.755 "nvme": [ 00:20:00.755 { 00:20:00.755 "ctrlr_data": { 00:20:00.755 "ana_reporting": false, 00:20:00.755 "cntlid": 3, 00:20:00.755 "firmware_revision": "24.01.1", 00:20:00.755 "model_number": "SPDK bdev Controller", 00:20:00.755 "multi_ctrlr": true, 00:20:00.755 "oacs": { 00:20:00.755 "firmware": 0, 00:20:00.755 "format": 0, 00:20:00.755 "ns_manage": 0, 00:20:00.755 "security": 0 00:20:00.755 }, 00:20:00.755 "serial_number": "00000000000000000000", 00:20:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.755 "vendor_id": "0x8086" 00:20:00.755 }, 00:20:00.755 "ns_data": { 00:20:00.755 "can_share": true, 00:20:00.755 "id": 1 00:20:00.755 }, 00:20:00.755 "trid": { 00:20:00.755 "adrfam": "IPv4", 00:20:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.755 "traddr": "10.0.0.2", 00:20:00.755 "trsvcid": "4421", 00:20:00.755 "trtype": "TCP" 00:20:00.755 }, 00:20:00.755 "vs": { 00:20:00.755 "nvme_version": "1.3" 00:20:00.755 } 00:20:00.755 } 00:20:00.755 ] 00:20:00.755 }, 00:20:00.755 
"name": "nvme0n1", 00:20:00.755 "num_blocks": 2097152, 00:20:00.755 "product_name": "NVMe disk", 00:20:00.755 "supported_io_types": { 00:20:00.755 "abort": true, 00:20:00.755 "compare": true, 00:20:00.755 "compare_and_write": true, 00:20:00.755 "flush": true, 00:20:00.755 "nvme_admin": true, 00:20:00.755 "nvme_io": true, 00:20:00.755 "read": true, 00:20:00.755 "reset": true, 00:20:00.755 "unmap": false, 00:20:00.755 "write": true, 00:20:00.755 "write_zeroes": true 00:20:00.755 }, 00:20:00.755 "uuid": "b3bf0ca4-e356-424e-b15f-946bea3859bd", 00:20:00.755 "zoned": false 00:20:00.755 } 00:20:00.755 ] 00:20:00.755 02:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.755 02:17:15 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.755 02:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.755 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:00.755 02:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.755 02:17:15 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.mDSWz4N8Sx 00:20:00.755 02:17:15 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:00.755 02:17:15 -- host/async_init.sh@78 -- # nvmftestfini 00:20:00.755 02:17:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:00.755 02:17:15 -- nvmf/common.sh@116 -- # sync 00:20:00.755 02:17:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:00.755 02:17:15 -- nvmf/common.sh@119 -- # set +e 00:20:00.755 02:17:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:00.755 02:17:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:00.755 rmmod nvme_tcp 00:20:00.755 rmmod nvme_fabrics 00:20:00.755 rmmod nvme_keyring 00:20:00.755 02:17:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:00.755 02:17:15 -- nvmf/common.sh@123 -- # set -e 00:20:00.755 02:17:15 -- nvmf/common.sh@124 -- # return 0 00:20:00.755 02:17:15 -- nvmf/common.sh@477 -- # '[' -n 80590 ']' 00:20:00.755 02:17:15 -- nvmf/common.sh@478 -- # killprocess 80590 00:20:00.755 02:17:15 -- common/autotest_common.sh@926 -- # '[' -z 80590 ']' 00:20:00.755 02:17:15 -- common/autotest_common.sh@930 -- # kill -0 80590 00:20:00.755 02:17:15 -- common/autotest_common.sh@931 -- # uname 00:20:00.756 02:17:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:00.756 02:17:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80590 00:20:00.756 killing process with pid 80590 00:20:00.756 02:17:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:00.756 02:17:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:00.756 02:17:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80590' 00:20:00.756 02:17:15 -- common/autotest_common.sh@945 -- # kill 80590 00:20:00.756 02:17:15 -- common/autotest_common.sh@950 -- # wait 80590 00:20:01.016 02:17:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:01.016 02:17:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:01.016 02:17:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:01.016 02:17:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.016 02:17:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:01.016 02:17:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.016 02:17:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.016 02:17:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.016 02:17:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:01.016 
00:20:01.016 real 0m2.533s 00:20:01.016 user 0m2.397s 00:20:01.016 sys 0m0.569s 00:20:01.016 02:17:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.016 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:01.016 ************************************ 00:20:01.016 END TEST nvmf_async_init 00:20:01.016 ************************************ 00:20:01.016 02:17:15 -- nvmf/nvmf.sh@93 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:01.016 02:17:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:01.016 02:17:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:01.016 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:01.016 ************************************ 00:20:01.016 START TEST dma 00:20:01.016 ************************************ 00:20:01.016 02:17:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:01.275 * Looking for test storage... 00:20:01.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:01.275 02:17:15 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:01.275 02:17:15 -- nvmf/common.sh@7 -- # uname -s 00:20:01.275 02:17:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.275 02:17:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.275 02:17:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.275 02:17:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.275 02:17:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.275 02:17:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.275 02:17:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.275 02:17:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.275 02:17:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.275 02:17:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.275 02:17:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:20:01.275 02:17:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:20:01.275 02:17:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.275 02:17:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.275 02:17:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:01.275 02:17:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:01.275 02:17:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.275 02:17:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.275 02:17:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.275 02:17:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.275 02:17:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.276 02:17:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.276 02:17:15 -- paths/export.sh@5 -- # export PATH 00:20:01.276 02:17:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.276 02:17:15 -- nvmf/common.sh@46 -- # : 0 00:20:01.276 02:17:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:01.276 02:17:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:01.276 02:17:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:01.276 02:17:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.276 02:17:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.276 02:17:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:01.276 02:17:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:01.276 02:17:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:01.276 02:17:15 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:01.276 02:17:15 -- host/dma.sh@13 -- # exit 0 00:20:01.276 00:20:01.276 real 0m0.103s 00:20:01.276 user 0m0.047s 00:20:01.276 sys 0m0.062s 00:20:01.276 02:17:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.276 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:01.276 ************************************ 00:20:01.276 END TEST dma 00:20:01.276 ************************************ 00:20:01.276 02:17:15 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:01.276 02:17:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:01.276 02:17:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:01.276 02:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:01.276 ************************************ 00:20:01.276 START TEST nvmf_identify 00:20:01.276 ************************************ 00:20:01.276 02:17:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:01.276 * Looking for test storage... 
00:20:01.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:01.276 02:17:15 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:01.276 02:17:15 -- nvmf/common.sh@7 -- # uname -s 00:20:01.276 02:17:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.276 02:17:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.276 02:17:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.276 02:17:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.276 02:17:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.276 02:17:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.276 02:17:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.276 02:17:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.276 02:17:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.276 02:17:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.276 02:17:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:20:01.276 02:17:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:20:01.276 02:17:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.276 02:17:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.276 02:17:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:01.276 02:17:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:01.276 02:17:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.276 02:17:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.276 02:17:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.276 02:17:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.276 02:17:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.276 02:17:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.276 02:17:15 -- paths/export.sh@5 
-- # export PATH 00:20:01.276 02:17:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.276 02:17:15 -- nvmf/common.sh@46 -- # : 0 00:20:01.276 02:17:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:01.276 02:17:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:01.276 02:17:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:01.276 02:17:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.276 02:17:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.276 02:17:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:01.276 02:17:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:01.276 02:17:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:01.276 02:17:15 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:01.276 02:17:15 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:01.276 02:17:15 -- host/identify.sh@14 -- # nvmftestinit 00:20:01.276 02:17:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:01.276 02:17:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.276 02:17:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:01.276 02:17:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:01.276 02:17:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:01.276 02:17:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.276 02:17:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.276 02:17:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.276 02:17:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:01.276 02:17:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:01.276 02:17:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:01.276 02:17:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:01.276 02:17:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:01.276 02:17:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:01.276 02:17:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.276 02:17:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.276 02:17:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:01.276 02:17:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:01.276 02:17:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:01.276 02:17:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:01.276 02:17:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:01.276 02:17:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.276 02:17:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:01.276 02:17:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:01.276 02:17:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:01.276 02:17:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:01.276 02:17:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:01.535 02:17:15 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:01.536 Cannot find device "nvmf_tgt_br" 00:20:01.536 02:17:15 -- nvmf/common.sh@154 -- # true 00:20:01.536 02:17:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:01.536 Cannot find device "nvmf_tgt_br2" 00:20:01.536 02:17:15 -- nvmf/common.sh@155 -- # true 00:20:01.536 02:17:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:01.536 02:17:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:01.536 Cannot find device "nvmf_tgt_br" 00:20:01.536 02:17:15 -- nvmf/common.sh@157 -- # true 00:20:01.536 02:17:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:01.536 Cannot find device "nvmf_tgt_br2" 00:20:01.536 02:17:15 -- nvmf/common.sh@158 -- # true 00:20:01.536 02:17:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:01.536 02:17:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:01.536 02:17:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:01.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.536 02:17:15 -- nvmf/common.sh@161 -- # true 00:20:01.536 02:17:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:01.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.536 02:17:15 -- nvmf/common.sh@162 -- # true 00:20:01.536 02:17:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:01.536 02:17:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:01.536 02:17:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:01.536 02:17:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:01.536 02:17:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:01.536 02:17:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:01.536 02:17:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:01.536 02:17:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:01.536 02:17:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:01.536 02:17:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:01.536 02:17:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:01.536 02:17:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:01.536 02:17:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:01.536 02:17:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:01.536 02:17:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:01.536 02:17:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:01.536 02:17:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:01.536 02:17:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:01.536 02:17:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:01.536 02:17:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:01.795 02:17:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:01.795 02:17:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:01.795 02:17:16 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:01.795 02:17:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:01.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:20:01.795 00:20:01.795 --- 10.0.0.2 ping statistics --- 00:20:01.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.795 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:20:01.795 02:17:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:01.795 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:01.795 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:01.795 00:20:01.795 --- 10.0.0.3 ping statistics --- 00:20:01.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.795 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:01.795 02:17:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:01.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:01.795 00:20:01.795 --- 10.0.0.1 ping statistics --- 00:20:01.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.795 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:01.795 02:17:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.795 02:17:16 -- nvmf/common.sh@421 -- # return 0 00:20:01.795 02:17:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:01.795 02:17:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.795 02:17:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:01.795 02:17:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:01.795 02:17:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.795 02:17:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:01.795 02:17:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:01.795 02:17:16 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:01.795 02:17:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:01.795 02:17:16 -- common/autotest_common.sh@10 -- # set +x 00:20:01.795 02:17:16 -- host/identify.sh@19 -- # nvmfpid=80853 00:20:01.795 02:17:16 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:01.795 02:17:16 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:01.795 02:17:16 -- host/identify.sh@23 -- # waitforlisten 80853 00:20:01.795 02:17:16 -- common/autotest_common.sh@819 -- # '[' -z 80853 ']' 00:20:01.795 02:17:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.795 02:17:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:01.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.795 02:17:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.795 02:17:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:01.795 02:17:16 -- common/autotest_common.sh@10 -- # set +x 00:20:01.795 [2024-05-14 02:17:16.262176] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
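The nvmf_veth_init plumbing traced above amounts to one host-side initiator veth bridged to two target-side veths living in a network namespace. A condensed sketch using the same commands as the trace (assumes root and that none of the devices exist yet):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator end stays in the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br            # target ends are moved into the netns
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                            # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up           # bridge joins the three host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # reachability checks as in the trace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1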
00:20:01.795 [2024-05-14 02:17:16.262329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.092 [2024-05-14 02:17:16.406351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.092 [2024-05-14 02:17:16.473384] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:02.092 [2024-05-14 02:17:16.473599] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.092 [2024-05-14 02:17:16.473613] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.092 [2024-05-14 02:17:16.473622] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.092 [2024-05-14 02:17:16.474001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.092 [2024-05-14 02:17:16.474054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.092 [2024-05-14 02:17:16.474394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.092 [2024-05-14 02:17:16.474423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.030 02:17:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:03.030 02:17:17 -- common/autotest_common.sh@852 -- # return 0 00:20:03.030 02:17:17 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.030 02:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.030 02:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.030 [2024-05-14 02:17:17.309329] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.030 02:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.030 02:17:17 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:03.030 02:17:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:03.030 02:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.030 02:17:17 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:03.030 02:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.031 02:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.031 Malloc0 00:20:03.031 02:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.031 02:17:17 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.031 02:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.031 02:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.031 02:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.031 02:17:17 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:03.031 02:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.031 02:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.031 02:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.031 02:17:17 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.031 02:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.031 02:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.031 [2024-05-14 02:17:17.404955] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.031 02:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.031 02:17:17 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:03.031 02:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.031 02:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.031 02:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.031 02:17:17 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:03.031 02:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.031 02:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.031 [2024-05-14 02:17:17.420710] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:03.031 [ 00:20:03.031 { 00:20:03.031 "allow_any_host": true, 00:20:03.031 "hosts": [], 00:20:03.031 "listen_addresses": [ 00:20:03.031 { 00:20:03.031 "adrfam": "IPv4", 00:20:03.031 "traddr": "10.0.0.2", 00:20:03.031 "transport": "TCP", 00:20:03.031 "trsvcid": "4420", 00:20:03.031 "trtype": "TCP" 00:20:03.031 } 00:20:03.031 ], 00:20:03.031 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:03.031 "subtype": "Discovery" 00:20:03.031 }, 00:20:03.031 { 00:20:03.031 "allow_any_host": true, 00:20:03.031 "hosts": [], 00:20:03.031 "listen_addresses": [ 00:20:03.031 { 00:20:03.031 "adrfam": "IPv4", 00:20:03.031 "traddr": "10.0.0.2", 00:20:03.031 "transport": "TCP", 00:20:03.031 "trsvcid": "4420", 00:20:03.031 "trtype": "TCP" 00:20:03.031 } 00:20:03.031 ], 00:20:03.031 "max_cntlid": 65519, 00:20:03.031 "max_namespaces": 32, 00:20:03.031 "min_cntlid": 1, 00:20:03.031 "model_number": "SPDK bdev Controller", 00:20:03.031 "namespaces": [ 00:20:03.031 { 00:20:03.031 "bdev_name": "Malloc0", 00:20:03.031 "eui64": "ABCDEF0123456789", 00:20:03.031 "name": "Malloc0", 00:20:03.031 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:03.031 "nsid": 1, 00:20:03.031 "uuid": "7a64f922-fe5c-46ee-9ecb-b36ac3603496" 00:20:03.031 } 00:20:03.031 ], 00:20:03.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.031 "serial_number": "SPDK00000000000001", 00:20:03.031 "subtype": "NVMe" 00:20:03.031 } 00:20:03.031 ] 00:20:03.031 02:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.031 02:17:17 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:03.031 [2024-05-14 02:17:17.455913] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
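The identify test's target-side configuration, reflected in the nvmf_get_subsystems output above, is the short RPC sequence below; a minimal sketch under the same rpc_cmd assumption as before, followed by the identify invocation the trace launches next:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                     # transport options as traced
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB malloc bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420    # discovery service on the same port
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all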
00:20:03.031 [2024-05-14 02:17:17.455965] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80906 ] 00:20:03.031 [2024-05-14 02:17:17.595003] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:03.031 [2024-05-14 02:17:17.595077] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:03.031 [2024-05-14 02:17:17.595085] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:03.031 [2024-05-14 02:17:17.595097] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:03.031 [2024-05-14 02:17:17.595107] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:03.031 [2024-05-14 02:17:17.595228] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:03.031 [2024-05-14 02:17:17.595274] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x760270 0 00:20:03.031 [2024-05-14 02:17:17.607780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:03.031 [2024-05-14 02:17:17.607803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:03.031 [2024-05-14 02:17:17.607810] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:03.031 [2024-05-14 02:17:17.607813] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:03.031 [2024-05-14 02:17:17.607858] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.607866] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.607870] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x760270) 00:20:03.031 [2024-05-14 02:17:17.607884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:03.031 [2024-05-14 02:17:17.607921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f6d0, cid 0, qid 0 00:20:03.031 [2024-05-14 02:17:17.615785] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.031 [2024-05-14 02:17:17.615806] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.031 [2024-05-14 02:17:17.615812] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.615817] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79f6d0) on tqpair=0x760270 00:20:03.031 [2024-05-14 02:17:17.615829] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:03.031 [2024-05-14 02:17:17.615837] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:03.031 [2024-05-14 02:17:17.615844] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:03.031 [2024-05-14 02:17:17.615863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.615869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.615873] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x760270) 00:20:03.031 [2024-05-14 02:17:17.615883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.031 [2024-05-14 02:17:17.615911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f6d0, cid 0, qid 0 00:20:03.031 [2024-05-14 02:17:17.615985] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.031 [2024-05-14 02:17:17.615993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.031 [2024-05-14 02:17:17.615997] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.616001] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79f6d0) on tqpair=0x760270 00:20:03.031 [2024-05-14 02:17:17.616012] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:03.031 [2024-05-14 02:17:17.616020] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:03.031 [2024-05-14 02:17:17.616028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.616033] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.616037] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x760270) 00:20:03.031 [2024-05-14 02:17:17.616045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.031 [2024-05-14 02:17:17.616065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f6d0, cid 0, qid 0 00:20:03.031 [2024-05-14 02:17:17.616121] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.031 [2024-05-14 02:17:17.616128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.031 [2024-05-14 02:17:17.616132] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.616137] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79f6d0) on tqpair=0x760270 00:20:03.031 [2024-05-14 02:17:17.616143] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:03.031 [2024-05-14 02:17:17.616152] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:03.031 [2024-05-14 02:17:17.616160] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.616164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.616168] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x760270) 00:20:03.031 [2024-05-14 02:17:17.616175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.031 [2024-05-14 02:17:17.616194] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f6d0, cid 0, qid 0 00:20:03.031 [2024-05-14 02:17:17.616246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.031 [2024-05-14 02:17:17.616253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:03.031 [2024-05-14 02:17:17.616257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.616262] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79f6d0) on tqpair=0x760270 00:20:03.031 [2024-05-14 02:17:17.616269] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:03.031 [2024-05-14 02:17:17.616279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.616283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.616287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x760270) 00:20:03.031 [2024-05-14 02:17:17.616295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.031 [2024-05-14 02:17:17.616313] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f6d0, cid 0, qid 0 00:20:03.031 [2024-05-14 02:17:17.616372] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.031 [2024-05-14 02:17:17.616379] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.031 [2024-05-14 02:17:17.616383] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.031 [2024-05-14 02:17:17.616387] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79f6d0) on tqpair=0x760270 00:20:03.032 [2024-05-14 02:17:17.616392] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:03.032 [2024-05-14 02:17:17.616398] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:03.032 [2024-05-14 02:17:17.616406] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:03.032 [2024-05-14 02:17:17.616512] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:03.032 [2024-05-14 02:17:17.616523] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:03.032 [2024-05-14 02:17:17.616533] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.616538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.616542] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x760270) 00:20:03.032 [2024-05-14 02:17:17.616550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.032 [2024-05-14 02:17:17.616569] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f6d0, cid 0, qid 0 00:20:03.032 [2024-05-14 02:17:17.616625] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.032 [2024-05-14 02:17:17.616644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.032 [2024-05-14 02:17:17.616649] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.616653] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79f6d0) on tqpair=0x760270 00:20:03.032 [2024-05-14 02:17:17.616659] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:03.032 [2024-05-14 02:17:17.616670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.616675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.616679] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x760270) 00:20:03.032 [2024-05-14 02:17:17.616687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.032 [2024-05-14 02:17:17.616706] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f6d0, cid 0, qid 0 00:20:03.032 [2024-05-14 02:17:17.616757] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.032 [2024-05-14 02:17:17.616776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.032 [2024-05-14 02:17:17.616782] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.616786] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79f6d0) on tqpair=0x760270 00:20:03.032 [2024-05-14 02:17:17.616791] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:03.032 [2024-05-14 02:17:17.616797] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:03.032 [2024-05-14 02:17:17.616806] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:03.032 [2024-05-14 02:17:17.616816] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:03.032 [2024-05-14 02:17:17.616827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.616831] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.616835] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x760270) 00:20:03.032 [2024-05-14 02:17:17.616844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.032 [2024-05-14 02:17:17.616865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f6d0, cid 0, qid 0 00:20:03.032 [2024-05-14 02:17:17.616959] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.032 [2024-05-14 02:17:17.616974] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.032 [2024-05-14 02:17:17.616979] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.616984] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x760270): datao=0, datal=4096, cccid=0 00:20:03.032 [2024-05-14 02:17:17.616989] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x79f6d0) on tqpair(0x760270): expected_datao=0, payload_size=4096 00:20:03.032 [2024-05-14 02:17:17.616999] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617004] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617013] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.032 [2024-05-14 02:17:17.617020] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.032 [2024-05-14 02:17:17.617024] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617028] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79f6d0) on tqpair=0x760270 00:20:03.032 [2024-05-14 02:17:17.617037] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:03.032 [2024-05-14 02:17:17.617047] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:03.032 [2024-05-14 02:17:17.617052] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:03.032 [2024-05-14 02:17:17.617058] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:03.032 [2024-05-14 02:17:17.617063] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:03.032 [2024-05-14 02:17:17.617069] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:03.032 [2024-05-14 02:17:17.617078] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:03.032 [2024-05-14 02:17:17.617086] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617091] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617095] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x760270) 00:20:03.032 [2024-05-14 02:17:17.617103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.032 [2024-05-14 02:17:17.617124] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f6d0, cid 0, qid 0 00:20:03.032 [2024-05-14 02:17:17.617186] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.032 [2024-05-14 02:17:17.617193] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.032 [2024-05-14 02:17:17.617197] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79f6d0) on tqpair=0x760270 00:20:03.032 [2024-05-14 02:17:17.617211] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617215] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x760270) 00:20:03.032 [2024-05-14 02:17:17.617226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.032 [2024-05-14 02:17:17.617233] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617237] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617241] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x760270) 00:20:03.032 [2024-05-14 02:17:17.617247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.032 [2024-05-14 02:17:17.617254] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617258] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617262] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x760270) 00:20:03.032 [2024-05-14 02:17:17.617268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.032 [2024-05-14 02:17:17.617275] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617280] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617283] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x760270) 00:20:03.032 [2024-05-14 02:17:17.617290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.032 [2024-05-14 02:17:17.617296] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:03.032 [2024-05-14 02:17:17.617309] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:03.032 [2024-05-14 02:17:17.617317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x760270) 00:20:03.032 [2024-05-14 02:17:17.617333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.032 [2024-05-14 02:17:17.617354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f6d0, cid 0, qid 0 00:20:03.032 [2024-05-14 02:17:17.617361] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f830, cid 1, qid 0 00:20:03.032 [2024-05-14 02:17:17.617366] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79f990, cid 2, qid 0 00:20:03.032 [2024-05-14 02:17:17.617371] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79faf0, cid 3, qid 0 00:20:03.032 [2024-05-14 02:17:17.617377] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79fc50, cid 4, qid 0 00:20:03.032 [2024-05-14 02:17:17.617480] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.032 [2024-05-14 02:17:17.617486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.032 [2024-05-14 02:17:17.617490] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617495] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79fc50) on tqpair=0x760270 00:20:03.032 
[2024-05-14 02:17:17.617500] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:03.032 [2024-05-14 02:17:17.617506] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:03.032 [2024-05-14 02:17:17.617518] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617523] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.032 [2024-05-14 02:17:17.617527] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x760270) 00:20:03.032 [2024-05-14 02:17:17.617534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.032 [2024-05-14 02:17:17.617552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79fc50, cid 4, qid 0 00:20:03.032 [2024-05-14 02:17:17.617628] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.032 [2024-05-14 02:17:17.617639] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.033 [2024-05-14 02:17:17.617643] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617648] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x760270): datao=0, datal=4096, cccid=4 00:20:03.033 [2024-05-14 02:17:17.617653] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x79fc50) on tqpair(0x760270): expected_datao=0, payload_size=4096 00:20:03.033 [2024-05-14 02:17:17.617662] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617666] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617675] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.033 [2024-05-14 02:17:17.617681] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.033 [2024-05-14 02:17:17.617686] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617690] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79fc50) on tqpair=0x760270 00:20:03.033 [2024-05-14 02:17:17.617704] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:03.033 [2024-05-14 02:17:17.617724] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617733] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x760270) 00:20:03.033 [2024-05-14 02:17:17.617741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.033 [2024-05-14 02:17:17.617749] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617753] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617757] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x760270) 00:20:03.033 [2024-05-14 02:17:17.617775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:03.033 [2024-05-14 02:17:17.617811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79fc50, cid 4, qid 0 00:20:03.033 [2024-05-14 02:17:17.617830] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79fdb0, cid 5, qid 0 00:20:03.033 [2024-05-14 02:17:17.617947] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.033 [2024-05-14 02:17:17.617959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.033 [2024-05-14 02:17:17.617963] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617967] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x760270): datao=0, datal=1024, cccid=4 00:20:03.033 [2024-05-14 02:17:17.617973] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x79fc50) on tqpair(0x760270): expected_datao=0, payload_size=1024 00:20:03.033 [2024-05-14 02:17:17.617981] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617985] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.033 [2024-05-14 02:17:17.617992] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.293 [2024-05-14 02:17:17.617998] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.293 [2024-05-14 02:17:17.618002] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.618006] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79fdb0) on tqpair=0x760270 00:20:03.293 [2024-05-14 02:17:17.663780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.293 [2024-05-14 02:17:17.663804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.293 [2024-05-14 02:17:17.663810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.663815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79fc50) on tqpair=0x760270 00:20:03.293 [2024-05-14 02:17:17.663833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.663839] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.663843] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x760270) 00:20:03.293 [2024-05-14 02:17:17.663853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.293 [2024-05-14 02:17:17.663888] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79fc50, cid 4, qid 0 00:20:03.293 [2024-05-14 02:17:17.663978] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.293 [2024-05-14 02:17:17.663986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.293 [2024-05-14 02:17:17.663990] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.663994] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x760270): datao=0, datal=3072, cccid=4 00:20:03.293 [2024-05-14 02:17:17.663999] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x79fc50) on tqpair(0x760270): expected_datao=0, payload_size=3072 00:20:03.293 [2024-05-14 02:17:17.664008] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.293 [2024-05-14 
02:17:17.664012] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.664021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.293 [2024-05-14 02:17:17.664028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.293 [2024-05-14 02:17:17.664032] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.664036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79fc50) on tqpair=0x760270 00:20:03.293 [2024-05-14 02:17:17.664047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.664052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.664056] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x760270) 00:20:03.293 [2024-05-14 02:17:17.664063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.293 [2024-05-14 02:17:17.664090] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79fc50, cid 4, qid 0 00:20:03.293 [2024-05-14 02:17:17.664163] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.293 [2024-05-14 02:17:17.664170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.293 [2024-05-14 02:17:17.664174] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.664178] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x760270): datao=0, datal=8, cccid=4 00:20:03.293 [2024-05-14 02:17:17.664183] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x79fc50) on tqpair(0x760270): expected_datao=0, payload_size=8 00:20:03.293 [2024-05-14 02:17:17.664192] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.664196] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.705832] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.293 [2024-05-14 02:17:17.705857] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.293 [2024-05-14 02:17:17.705863] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.293 [2024-05-14 02:17:17.705868] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79fc50) on tqpair=0x760270 00:20:03.293 ===================================================== 00:20:03.293 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:03.293 ===================================================== 00:20:03.293 Controller Capabilities/Features 00:20:03.293 ================================ 00:20:03.293 Vendor ID: 0000 00:20:03.293 Subsystem Vendor ID: 0000 00:20:03.293 Serial Number: .................... 00:20:03.293 Model Number: ........................................ 
00:20:03.293 Firmware Version: 24.01.1 00:20:03.293 Recommended Arb Burst: 0 00:20:03.293 IEEE OUI Identifier: 00 00 00 00:20:03.293 Multi-path I/O 00:20:03.293 May have multiple subsystem ports: No 00:20:03.293 May have multiple controllers: No 00:20:03.293 Associated with SR-IOV VF: No 00:20:03.293 Max Data Transfer Size: 131072 00:20:03.293 Max Number of Namespaces: 0 00:20:03.293 Max Number of I/O Queues: 1024 00:20:03.293 NVMe Specification Version (VS): 1.3 00:20:03.293 NVMe Specification Version (Identify): 1.3 00:20:03.293 Maximum Queue Entries: 128 00:20:03.293 Contiguous Queues Required: Yes 00:20:03.293 Arbitration Mechanisms Supported 00:20:03.293 Weighted Round Robin: Not Supported 00:20:03.293 Vendor Specific: Not Supported 00:20:03.293 Reset Timeout: 15000 ms 00:20:03.293 Doorbell Stride: 4 bytes 00:20:03.293 NVM Subsystem Reset: Not Supported 00:20:03.293 Command Sets Supported 00:20:03.293 NVM Command Set: Supported 00:20:03.293 Boot Partition: Not Supported 00:20:03.293 Memory Page Size Minimum: 4096 bytes 00:20:03.293 Memory Page Size Maximum: 4096 bytes 00:20:03.293 Persistent Memory Region: Not Supported 00:20:03.293 Optional Asynchronous Events Supported 00:20:03.293 Namespace Attribute Notices: Not Supported 00:20:03.293 Firmware Activation Notices: Not Supported 00:20:03.293 ANA Change Notices: Not Supported 00:20:03.293 PLE Aggregate Log Change Notices: Not Supported 00:20:03.293 LBA Status Info Alert Notices: Not Supported 00:20:03.293 EGE Aggregate Log Change Notices: Not Supported 00:20:03.293 Normal NVM Subsystem Shutdown event: Not Supported 00:20:03.293 Zone Descriptor Change Notices: Not Supported 00:20:03.293 Discovery Log Change Notices: Supported 00:20:03.293 Controller Attributes 00:20:03.293 128-bit Host Identifier: Not Supported 00:20:03.293 Non-Operational Permissive Mode: Not Supported 00:20:03.293 NVM Sets: Not Supported 00:20:03.293 Read Recovery Levels: Not Supported 00:20:03.293 Endurance Groups: Not Supported 00:20:03.293 Predictable Latency Mode: Not Supported 00:20:03.293 Traffic Based Keep ALive: Not Supported 00:20:03.293 Namespace Granularity: Not Supported 00:20:03.293 SQ Associations: Not Supported 00:20:03.293 UUID List: Not Supported 00:20:03.293 Multi-Domain Subsystem: Not Supported 00:20:03.293 Fixed Capacity Management: Not Supported 00:20:03.293 Variable Capacity Management: Not Supported 00:20:03.293 Delete Endurance Group: Not Supported 00:20:03.293 Delete NVM Set: Not Supported 00:20:03.293 Extended LBA Formats Supported: Not Supported 00:20:03.293 Flexible Data Placement Supported: Not Supported 00:20:03.293 00:20:03.293 Controller Memory Buffer Support 00:20:03.293 ================================ 00:20:03.293 Supported: No 00:20:03.293 00:20:03.293 Persistent Memory Region Support 00:20:03.293 ================================ 00:20:03.293 Supported: No 00:20:03.293 00:20:03.293 Admin Command Set Attributes 00:20:03.293 ============================ 00:20:03.293 Security Send/Receive: Not Supported 00:20:03.293 Format NVM: Not Supported 00:20:03.293 Firmware Activate/Download: Not Supported 00:20:03.293 Namespace Management: Not Supported 00:20:03.293 Device Self-Test: Not Supported 00:20:03.293 Directives: Not Supported 00:20:03.293 NVMe-MI: Not Supported 00:20:03.293 Virtualization Management: Not Supported 00:20:03.293 Doorbell Buffer Config: Not Supported 00:20:03.293 Get LBA Status Capability: Not Supported 00:20:03.293 Command & Feature Lockdown Capability: Not Supported 00:20:03.293 Abort Command Limit: 1 00:20:03.293 
Async Event Request Limit: 4 00:20:03.293 Number of Firmware Slots: N/A 00:20:03.293 Firmware Slot 1 Read-Only: N/A 00:20:03.293 Firmware Activation Without Reset: N/A 00:20:03.293 Multiple Update Detection Support: N/A 00:20:03.293 Firmware Update Granularity: No Information Provided 00:20:03.293 Per-Namespace SMART Log: No 00:20:03.293 Asymmetric Namespace Access Log Page: Not Supported 00:20:03.293 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:03.293 Command Effects Log Page: Not Supported 00:20:03.293 Get Log Page Extended Data: Supported 00:20:03.293 Telemetry Log Pages: Not Supported 00:20:03.293 Persistent Event Log Pages: Not Supported 00:20:03.293 Supported Log Pages Log Page: May Support 00:20:03.293 Commands Supported & Effects Log Page: Not Supported 00:20:03.293 Feature Identifiers & Effects Log Page:May Support 00:20:03.293 NVMe-MI Commands & Effects Log Page: May Support 00:20:03.293 Data Area 4 for Telemetry Log: Not Supported 00:20:03.293 Error Log Page Entries Supported: 128 00:20:03.293 Keep Alive: Not Supported 00:20:03.293 00:20:03.293 NVM Command Set Attributes 00:20:03.293 ========================== 00:20:03.293 Submission Queue Entry Size 00:20:03.293 Max: 1 00:20:03.293 Min: 1 00:20:03.293 Completion Queue Entry Size 00:20:03.293 Max: 1 00:20:03.293 Min: 1 00:20:03.293 Number of Namespaces: 0 00:20:03.293 Compare Command: Not Supported 00:20:03.294 Write Uncorrectable Command: Not Supported 00:20:03.294 Dataset Management Command: Not Supported 00:20:03.294 Write Zeroes Command: Not Supported 00:20:03.294 Set Features Save Field: Not Supported 00:20:03.294 Reservations: Not Supported 00:20:03.294 Timestamp: Not Supported 00:20:03.294 Copy: Not Supported 00:20:03.294 Volatile Write Cache: Not Present 00:20:03.294 Atomic Write Unit (Normal): 1 00:20:03.294 Atomic Write Unit (PFail): 1 00:20:03.294 Atomic Compare & Write Unit: 1 00:20:03.294 Fused Compare & Write: Supported 00:20:03.294 Scatter-Gather List 00:20:03.294 SGL Command Set: Supported 00:20:03.294 SGL Keyed: Supported 00:20:03.294 SGL Bit Bucket Descriptor: Not Supported 00:20:03.294 SGL Metadata Pointer: Not Supported 00:20:03.294 Oversized SGL: Not Supported 00:20:03.294 SGL Metadata Address: Not Supported 00:20:03.294 SGL Offset: Supported 00:20:03.294 Transport SGL Data Block: Not Supported 00:20:03.294 Replay Protected Memory Block: Not Supported 00:20:03.294 00:20:03.294 Firmware Slot Information 00:20:03.294 ========================= 00:20:03.294 Active slot: 0 00:20:03.294 00:20:03.294 00:20:03.294 Error Log 00:20:03.294 ========= 00:20:03.294 00:20:03.294 Active Namespaces 00:20:03.294 ================= 00:20:03.294 Discovery Log Page 00:20:03.294 ================== 00:20:03.294 Generation Counter: 2 00:20:03.294 Number of Records: 2 00:20:03.294 Record Format: 0 00:20:03.294 00:20:03.294 Discovery Log Entry 0 00:20:03.294 ---------------------- 00:20:03.294 Transport Type: 3 (TCP) 00:20:03.294 Address Family: 1 (IPv4) 00:20:03.294 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:03.294 Entry Flags: 00:20:03.294 Duplicate Returned Information: 1 00:20:03.294 Explicit Persistent Connection Support for Discovery: 1 00:20:03.294 Transport Requirements: 00:20:03.294 Secure Channel: Not Required 00:20:03.294 Port ID: 0 (0x0000) 00:20:03.294 Controller ID: 65535 (0xffff) 00:20:03.294 Admin Max SQ Size: 128 00:20:03.294 Transport Service Identifier: 4420 00:20:03.294 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:03.294 Transport Address: 10.0.0.2 00:20:03.294 
Discovery Log Entry 1 00:20:03.294 ---------------------- 00:20:03.294 Transport Type: 3 (TCP) 00:20:03.294 Address Family: 1 (IPv4) 00:20:03.294 Subsystem Type: 2 (NVM Subsystem) 00:20:03.294 Entry Flags: 00:20:03.294 Duplicate Returned Information: 0 00:20:03.294 Explicit Persistent Connection Support for Discovery: 0 00:20:03.294 Transport Requirements: 00:20:03.294 Secure Channel: Not Required 00:20:03.294 Port ID: 0 (0x0000) 00:20:03.294 Controller ID: 65535 (0xffff) 00:20:03.294 Admin Max SQ Size: 128 00:20:03.294 Transport Service Identifier: 4420 00:20:03.294 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:03.294 Transport Address: 10.0.0.2 [2024-05-14 02:17:17.705978] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:03.294 [2024-05-14 02:17:17.705997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.294 [2024-05-14 02:17:17.706005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.294 [2024-05-14 02:17:17.706012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.294 [2024-05-14 02:17:17.706018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.294 [2024-05-14 02:17:17.706032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706037] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x760270) 00:20:03.294 [2024-05-14 02:17:17.706051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.294 [2024-05-14 02:17:17.706078] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79faf0, cid 3, qid 0 00:20:03.294 [2024-05-14 02:17:17.706142] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.294 [2024-05-14 02:17:17.706149] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.294 [2024-05-14 02:17:17.706153] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706158] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79faf0) on tqpair=0x760270 00:20:03.294 [2024-05-14 02:17:17.706166] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706171] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706175] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x760270) 00:20:03.294 [2024-05-14 02:17:17.706182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.294 [2024-05-14 02:17:17.706206] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79faf0, cid 3, qid 0 00:20:03.294 [2024-05-14 02:17:17.706283] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.294 [2024-05-14 02:17:17.706290] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.294 [2024-05-14 02:17:17.706294] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706299] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79faf0) on tqpair=0x760270 00:20:03.294 [2024-05-14 02:17:17.706304] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:03.294 [2024-05-14 02:17:17.706310] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:03.294 [2024-05-14 02:17:17.706321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706325] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706329] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x760270) 00:20:03.294 [2024-05-14 02:17:17.706337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.294 [2024-05-14 02:17:17.706355] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79faf0, cid 3, qid 0 00:20:03.294 [2024-05-14 02:17:17.706409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.294 [2024-05-14 02:17:17.706416] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.294 [2024-05-14 02:17:17.706420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706425] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79faf0) on tqpair=0x760270 00:20:03.294 [2024-05-14 02:17:17.706436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706445] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x760270) 00:20:03.294 [2024-05-14 02:17:17.706453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.294 [2024-05-14 02:17:17.706471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79faf0, cid 3, qid 0 00:20:03.294 [2024-05-14 02:17:17.706527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.294 [2024-05-14 02:17:17.706534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.294 [2024-05-14 02:17:17.706539] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706543] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79faf0) on tqpair=0x760270 00:20:03.294 [2024-05-14 02:17:17.706554] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706558] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x760270) 00:20:03.294 [2024-05-14 02:17:17.706570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.294 [2024-05-14 02:17:17.706588] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79faf0, cid 3, qid 0 00:20:03.294 [2024-05-14 02:17:17.706641] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.294 [2024-05-14 
02:17:17.706648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.294 [2024-05-14 02:17:17.706652] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79faf0) on tqpair=0x760270 00:20:03.294 [2024-05-14 02:17:17.706667] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706672] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.706676] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x760270) 00:20:03.294 [2024-05-14 02:17:17.706683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.294 [2024-05-14 02:17:17.706701] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79faf0, cid 3, qid 0 00:20:03.294 [2024-05-14 02:17:17.706753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.294 [2024-05-14 02:17:17.706760] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.294 [2024-05-14 02:17:17.710783] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.710792] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79faf0) on tqpair=0x760270 00:20:03.294 [2024-05-14 02:17:17.710808] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.710814] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.710818] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x760270) 00:20:03.294 [2024-05-14 02:17:17.710827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.294 [2024-05-14 02:17:17.710853] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79faf0, cid 3, qid 0 00:20:03.294 [2024-05-14 02:17:17.710916] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.294 [2024-05-14 02:17:17.710924] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.294 [2024-05-14 02:17:17.710928] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.294 [2024-05-14 02:17:17.710932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x79faf0) on tqpair=0x760270 00:20:03.295 [2024-05-14 02:17:17.710941] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:20:03.295 00:20:03.295 02:17:17 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:03.295 [2024-05-14 02:17:17.743438] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:03.295 [2024-05-14 02:17:17.743487] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80913 ] 00:20:03.559 [2024-05-14 02:17:17.884140] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:03.559 [2024-05-14 02:17:17.884218] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:03.559 [2024-05-14 02:17:17.884226] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:03.559 [2024-05-14 02:17:17.884241] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:03.559 [2024-05-14 02:17:17.884251] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:03.559 [2024-05-14 02:17:17.884387] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:03.559 [2024-05-14 02:17:17.884438] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x98a270 0 00:20:03.559 [2024-05-14 02:17:17.888782] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:03.559 [2024-05-14 02:17:17.888808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:03.559 [2024-05-14 02:17:17.888815] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:03.559 [2024-05-14 02:17:17.888819] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:03.559 [2024-05-14 02:17:17.888865] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.559 [2024-05-14 02:17:17.888874] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.559 [2024-05-14 02:17:17.888878] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98a270) 00:20:03.559 [2024-05-14 02:17:17.888893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:03.559 [2024-05-14 02:17:17.888927] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96d0, cid 0, qid 0 00:20:03.559 [2024-05-14 02:17:17.896785] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.559 [2024-05-14 02:17:17.896807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.559 [2024-05-14 02:17:17.896813] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.559 [2024-05-14 02:17:17.896818] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c96d0) on tqpair=0x98a270 00:20:03.560 [2024-05-14 02:17:17.896834] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:03.560 [2024-05-14 02:17:17.896844] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:03.560 [2024-05-14 02:17:17.896850] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:03.560 [2024-05-14 02:17:17.896870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.896876] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.896880] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98a270) 00:20:03.560 [2024-05-14 02:17:17.896891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-05-14 02:17:17.896921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96d0, cid 0, qid 0 00:20:03.560 [2024-05-14 02:17:17.896993] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.560 [2024-05-14 02:17:17.897001] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.560 [2024-05-14 02:17:17.897005] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897010] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c96d0) on tqpair=0x98a270 00:20:03.560 [2024-05-14 02:17:17.897021] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:03.560 [2024-05-14 02:17:17.897030] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:03.560 [2024-05-14 02:17:17.897039] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897044] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897048] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98a270) 00:20:03.560 [2024-05-14 02:17:17.897057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-05-14 02:17:17.897078] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96d0, cid 0, qid 0 00:20:03.560 [2024-05-14 02:17:17.897137] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.560 [2024-05-14 02:17:17.897146] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.560 [2024-05-14 02:17:17.897150] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897155] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c96d0) on tqpair=0x98a270 00:20:03.560 [2024-05-14 02:17:17.897161] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:03.560 [2024-05-14 02:17:17.897171] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:03.560 [2024-05-14 02:17:17.897179] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897183] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98a270) 00:20:03.560 [2024-05-14 02:17:17.897196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-05-14 02:17:17.897216] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96d0, cid 0, qid 0 00:20:03.560 [2024-05-14 02:17:17.897272] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.560 [2024-05-14 02:17:17.897279] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.560 [2024-05-14 02:17:17.897283] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897288] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c96d0) on tqpair=0x98a270 00:20:03.560 [2024-05-14 02:17:17.897294] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:03.560 [2024-05-14 02:17:17.897305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897310] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897315] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98a270) 00:20:03.560 [2024-05-14 02:17:17.897323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-05-14 02:17:17.897342] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96d0, cid 0, qid 0 00:20:03.560 [2024-05-14 02:17:17.897400] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.560 [2024-05-14 02:17:17.897412] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.560 [2024-05-14 02:17:17.897417] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897422] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c96d0) on tqpair=0x98a270 00:20:03.560 [2024-05-14 02:17:17.897428] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:03.560 [2024-05-14 02:17:17.897433] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:03.560 [2024-05-14 02:17:17.897442] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:03.560 [2024-05-14 02:17:17.897549] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:03.560 [2024-05-14 02:17:17.897554] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:03.560 [2024-05-14 02:17:17.897563] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897568] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897572] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98a270) 00:20:03.560 [2024-05-14 02:17:17.897581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-05-14 02:17:17.897601] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96d0, cid 0, qid 0 00:20:03.560 [2024-05-14 02:17:17.897660] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.560 [2024-05-14 02:17:17.897676] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.560 [2024-05-14 02:17:17.897682] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897686] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c96d0) on tqpair=0x98a270 00:20:03.560 [2024-05-14 02:17:17.897692] 
nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:03.560 [2024-05-14 02:17:17.897704] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897709] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897713] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98a270) 00:20:03.560 [2024-05-14 02:17:17.897721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-05-14 02:17:17.897742] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96d0, cid 0, qid 0 00:20:03.560 [2024-05-14 02:17:17.897814] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.560 [2024-05-14 02:17:17.897834] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.560 [2024-05-14 02:17:17.897839] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897844] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c96d0) on tqpair=0x98a270 00:20:03.560 [2024-05-14 02:17:17.897850] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:03.560 [2024-05-14 02:17:17.897856] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:03.560 [2024-05-14 02:17:17.897865] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:03.560 [2024-05-14 02:17:17.897877] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:03.560 [2024-05-14 02:17:17.897888] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897892] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.897897] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98a270) 00:20:03.560 [2024-05-14 02:17:17.897905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-05-14 02:17:17.897929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96d0, cid 0, qid 0 00:20:03.560 [2024-05-14 02:17:17.898053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.560 [2024-05-14 02:17:17.898072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.560 [2024-05-14 02:17:17.898081] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.898088] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98a270): datao=0, datal=4096, cccid=0 00:20:03.560 [2024-05-14 02:17:17.898097] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c96d0) on tqpair(0x98a270): expected_datao=0, payload_size=4096 00:20:03.560 [2024-05-14 02:17:17.898111] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.898117] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.560 [2024-05-14 
02:17:17.898128] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.560 [2024-05-14 02:17:17.898135] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.560 [2024-05-14 02:17:17.898139] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.898144] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c96d0) on tqpair=0x98a270 00:20:03.560 [2024-05-14 02:17:17.898153] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:03.560 [2024-05-14 02:17:17.898164] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:03.560 [2024-05-14 02:17:17.898170] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:03.560 [2024-05-14 02:17:17.898175] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:03.560 [2024-05-14 02:17:17.898181] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:03.560 [2024-05-14 02:17:17.898187] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:03.560 [2024-05-14 02:17:17.898198] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:03.560 [2024-05-14 02:17:17.898211] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.898219] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.560 [2024-05-14 02:17:17.898226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98a270) 00:20:03.560 [2024-05-14 02:17:17.898236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.560 [2024-05-14 02:17:17.898262] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96d0, cid 0, qid 0 00:20:03.560 [2024-05-14 02:17:17.898322] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.560 [2024-05-14 02:17:17.898331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.561 [2024-05-14 02:17:17.898336] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898340] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c96d0) on tqpair=0x98a270 00:20:03.561 [2024-05-14 02:17:17.898350] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898354] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898358] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98a270) 00:20:03.561 [2024-05-14 02:17:17.898366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.561 [2024-05-14 02:17:17.898373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898378] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898382] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x98a270) 
00:20:03.561 [2024-05-14 02:17:17.898388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.561 [2024-05-14 02:17:17.898395] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898399] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898403] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x98a270) 00:20:03.561 [2024-05-14 02:17:17.898410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.561 [2024-05-14 02:17:17.898417] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898421] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898426] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.561 [2024-05-14 02:17:17.898432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.561 [2024-05-14 02:17:17.898438] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.898453] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.898461] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898465] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98a270) 00:20:03.561 [2024-05-14 02:17:17.898477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.561 [2024-05-14 02:17:17.898502] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96d0, cid 0, qid 0 00:20:03.561 [2024-05-14 02:17:17.898515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9830, cid 1, qid 0 00:20:03.561 [2024-05-14 02:17:17.898520] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9990, cid 2, qid 0 00:20:03.561 [2024-05-14 02:17:17.898526] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.561 [2024-05-14 02:17:17.898531] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9c50, cid 4, qid 0 00:20:03.561 [2024-05-14 02:17:17.898633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.561 [2024-05-14 02:17:17.898640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.561 [2024-05-14 02:17:17.898645] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9c50) on tqpair=0x98a270 00:20:03.561 [2024-05-14 02:17:17.898655] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:03.561 [2024-05-14 02:17:17.898661] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.898670] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.898678] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.898685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898694] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98a270) 00:20:03.561 [2024-05-14 02:17:17.898702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.561 [2024-05-14 02:17:17.898722] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9c50, cid 4, qid 0 00:20:03.561 [2024-05-14 02:17:17.898799] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.561 [2024-05-14 02:17:17.898808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.561 [2024-05-14 02:17:17.898813] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898817] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9c50) on tqpair=0x98a270 00:20:03.561 [2024-05-14 02:17:17.898871] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.898887] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.898897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898902] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.898906] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98a270) 00:20:03.561 [2024-05-14 02:17:17.898914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.561 [2024-05-14 02:17:17.898937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9c50, cid 4, qid 0 00:20:03.561 [2024-05-14 02:17:17.899007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.561 [2024-05-14 02:17:17.899015] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.561 [2024-05-14 02:17:17.899019] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899023] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98a270): datao=0, datal=4096, cccid=4 00:20:03.561 [2024-05-14 02:17:17.899032] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9c50) on tqpair(0x98a270): expected_datao=0, payload_size=4096 00:20:03.561 [2024-05-14 02:17:17.899045] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899052] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899066] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:20:03.561 [2024-05-14 02:17:17.899077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.561 [2024-05-14 02:17:17.899084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899090] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9c50) on tqpair=0x98a270 00:20:03.561 [2024-05-14 02:17:17.899111] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:03.561 [2024-05-14 02:17:17.899123] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.899136] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.899145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899150] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98a270) 00:20:03.561 [2024-05-14 02:17:17.899163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.561 [2024-05-14 02:17:17.899187] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9c50, cid 4, qid 0 00:20:03.561 [2024-05-14 02:17:17.899267] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.561 [2024-05-14 02:17:17.899280] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.561 [2024-05-14 02:17:17.899285] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899289] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98a270): datao=0, datal=4096, cccid=4 00:20:03.561 [2024-05-14 02:17:17.899294] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9c50) on tqpair(0x98a270): expected_datao=0, payload_size=4096 00:20:03.561 [2024-05-14 02:17:17.899303] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899308] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899317] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.561 [2024-05-14 02:17:17.899324] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.561 [2024-05-14 02:17:17.899328] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899332] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9c50) on tqpair=0x98a270 00:20:03.561 [2024-05-14 02:17:17.899349] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.899361] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.899370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899375] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899379] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98a270) 00:20:03.561 [2024-05-14 02:17:17.899387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.561 [2024-05-14 02:17:17.899409] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9c50, cid 4, qid 0 00:20:03.561 [2024-05-14 02:17:17.899485] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.561 [2024-05-14 02:17:17.899492] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.561 [2024-05-14 02:17:17.899496] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899501] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98a270): datao=0, datal=4096, cccid=4 00:20:03.561 [2024-05-14 02:17:17.899506] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9c50) on tqpair(0x98a270): expected_datao=0, payload_size=4096 00:20:03.561 [2024-05-14 02:17:17.899515] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899519] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899528] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.561 [2024-05-14 02:17:17.899535] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.561 [2024-05-14 02:17:17.899539] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.561 [2024-05-14 02:17:17.899543] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9c50) on tqpair=0x98a270 00:20:03.561 [2024-05-14 02:17:17.899553] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:03.561 [2024-05-14 02:17:17.899562] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:03.562 [2024-05-14 02:17:17.899575] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:03.562 [2024-05-14 02:17:17.899583] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:03.562 [2024-05-14 02:17:17.899588] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:03.562 [2024-05-14 02:17:17.899594] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:03.562 [2024-05-14 02:17:17.899599] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:03.562 [2024-05-14 02:17:17.899605] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:03.562 [2024-05-14 02:17:17.899622] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899632] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98a270) 00:20:03.562 [2024-05-14 02:17:17.899640] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.562 [2024-05-14 02:17:17.899648] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899652] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899657] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98a270) 00:20:03.562 [2024-05-14 02:17:17.899663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.562 [2024-05-14 02:17:17.899690] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9c50, cid 4, qid 0 00:20:03.562 [2024-05-14 02:17:17.899698] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9db0, cid 5, qid 0 00:20:03.562 [2024-05-14 02:17:17.899794] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.562 [2024-05-14 02:17:17.899804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.562 [2024-05-14 02:17:17.899808] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9c50) on tqpair=0x98a270 00:20:03.562 [2024-05-14 02:17:17.899820] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.562 [2024-05-14 02:17:17.899827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.562 [2024-05-14 02:17:17.899831] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899835] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9db0) on tqpair=0x98a270 00:20:03.562 [2024-05-14 02:17:17.899846] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899852] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899856] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98a270) 00:20:03.562 [2024-05-14 02:17:17.899864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.562 [2024-05-14 02:17:17.899885] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9db0, cid 5, qid 0 00:20:03.562 [2024-05-14 02:17:17.899946] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.562 [2024-05-14 02:17:17.899953] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.562 [2024-05-14 02:17:17.899957] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899962] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9db0) on tqpair=0x98a270 00:20:03.562 [2024-05-14 02:17:17.899973] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899978] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.899982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98a270) 00:20:03.562 [2024-05-14 02:17:17.899990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.562 [2024-05-14 
02:17:17.900009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9db0, cid 5, qid 0 00:20:03.562 [2024-05-14 02:17:17.900065] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.562 [2024-05-14 02:17:17.900073] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.562 [2024-05-14 02:17:17.900077] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900082] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9db0) on tqpair=0x98a270 00:20:03.562 [2024-05-14 02:17:17.900093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900098] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900102] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98a270) 00:20:03.562 [2024-05-14 02:17:17.900110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.562 [2024-05-14 02:17:17.900129] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9db0, cid 5, qid 0 00:20:03.562 [2024-05-14 02:17:17.900193] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.562 [2024-05-14 02:17:17.900206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.562 [2024-05-14 02:17:17.900213] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9db0) on tqpair=0x98a270 00:20:03.562 [2024-05-14 02:17:17.900238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900243] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900248] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98a270) 00:20:03.562 [2024-05-14 02:17:17.900256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.562 [2024-05-14 02:17:17.900265] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98a270) 00:20:03.562 [2024-05-14 02:17:17.900280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.562 [2024-05-14 02:17:17.900289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900293] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x98a270) 00:20:03.562 [2024-05-14 02:17:17.900304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.562 [2024-05-14 02:17:17.900312] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900317] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900324] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x98a270) 00:20:03.562 [2024-05-14 02:17:17.900334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.562 [2024-05-14 02:17:17.900367] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9db0, cid 5, qid 0 00:20:03.562 [2024-05-14 02:17:17.900376] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9c50, cid 4, qid 0 00:20:03.562 [2024-05-14 02:17:17.900381] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9f10, cid 6, qid 0 00:20:03.562 [2024-05-14 02:17:17.900386] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ca070, cid 7, qid 0 00:20:03.562 [2024-05-14 02:17:17.900527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.562 [2024-05-14 02:17:17.900547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.562 [2024-05-14 02:17:17.900552] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900556] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98a270): datao=0, datal=8192, cccid=5 00:20:03.562 [2024-05-14 02:17:17.900562] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9db0) on tqpair(0x98a270): expected_datao=0, payload_size=8192 00:20:03.562 [2024-05-14 02:17:17.900582] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900587] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900594] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.562 [2024-05-14 02:17:17.900600] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.562 [2024-05-14 02:17:17.900604] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900609] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98a270): datao=0, datal=512, cccid=4 00:20:03.562 [2024-05-14 02:17:17.900614] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9c50) on tqpair(0x98a270): expected_datao=0, payload_size=512 00:20:03.562 [2024-05-14 02:17:17.900622] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900626] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.562 [2024-05-14 02:17:17.900638] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.562 [2024-05-14 02:17:17.900642] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900646] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98a270): datao=0, datal=512, cccid=6 00:20:03.562 [2024-05-14 02:17:17.900651] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9f10) on tqpair(0x98a270): expected_datao=0, payload_size=512 00:20:03.562 [2024-05-14 02:17:17.900659] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900663] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900669] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.562 [2024-05-14 02:17:17.900676] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.562 [2024-05-14 02:17:17.900680] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900684] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98a270): datao=0, datal=4096, cccid=7 00:20:03.562 [2024-05-14 02:17:17.900689] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ca070) on tqpair(0x98a270): expected_datao=0, payload_size=4096 00:20:03.562 [2024-05-14 02:17:17.900697] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900701] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.562 [2024-05-14 02:17:17.900714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.562 [2024-05-14 02:17:17.900718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.562 [2024-05-14 02:17:17.900722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9db0) on tqpair=0x98a270 00:20:03.562 [2024-05-14 02:17:17.900741] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.562 [2024-05-14 02:17:17.900748] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.563 [2024-05-14 02:17:17.900752] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.563 [2024-05-14 02:17:17.900757] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9c50) on tqpair=0x98a270 00:20:03.563 [2024-05-14 02:17:17.904797] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.563 [2024-05-14 02:17:17.904819] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.563 [2024-05-14 02:17:17.904825] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.563 [2024-05-14 02:17:17.904830] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9f10) on tqpair=0x98a270 00:20:03.563 [2024-05-14 02:17:17.904839] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.563 [2024-05-14 02:17:17.904846] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.563 [2024-05-14 02:17:17.904851] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.563 [2024-05-14 02:17:17.904855] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9ca070) on tqpair=0x98a270 00:20:03.563 ===================================================== 00:20:03.563 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:03.563 ===================================================== 00:20:03.563 Controller Capabilities/Features 00:20:03.563 ================================ 00:20:03.563 Vendor ID: 8086 00:20:03.563 Subsystem Vendor ID: 8086 00:20:03.563 Serial Number: SPDK00000000000001 00:20:03.563 Model Number: SPDK bdev Controller 00:20:03.563 Firmware Version: 24.01.1 00:20:03.563 Recommended Arb Burst: 6 00:20:03.563 IEEE OUI Identifier: e4 d2 5c 00:20:03.563 Multi-path I/O 00:20:03.563 May have multiple subsystem ports: Yes 00:20:03.563 May have multiple controllers: Yes 00:20:03.563 Associated with SR-IOV VF: No 00:20:03.563 Max Data Transfer Size: 131072 00:20:03.563 Max Number of Namespaces: 32 00:20:03.563 Max Number of I/O 
Queues: 127 00:20:03.563 NVMe Specification Version (VS): 1.3 00:20:03.563 NVMe Specification Version (Identify): 1.3 00:20:03.563 Maximum Queue Entries: 128 00:20:03.563 Contiguous Queues Required: Yes 00:20:03.563 Arbitration Mechanisms Supported 00:20:03.563 Weighted Round Robin: Not Supported 00:20:03.563 Vendor Specific: Not Supported 00:20:03.563 Reset Timeout: 15000 ms 00:20:03.563 Doorbell Stride: 4 bytes 00:20:03.563 NVM Subsystem Reset: Not Supported 00:20:03.563 Command Sets Supported 00:20:03.563 NVM Command Set: Supported 00:20:03.563 Boot Partition: Not Supported 00:20:03.563 Memory Page Size Minimum: 4096 bytes 00:20:03.563 Memory Page Size Maximum: 4096 bytes 00:20:03.563 Persistent Memory Region: Not Supported 00:20:03.563 Optional Asynchronous Events Supported 00:20:03.563 Namespace Attribute Notices: Supported 00:20:03.563 Firmware Activation Notices: Not Supported 00:20:03.563 ANA Change Notices: Not Supported 00:20:03.563 PLE Aggregate Log Change Notices: Not Supported 00:20:03.563 LBA Status Info Alert Notices: Not Supported 00:20:03.563 EGE Aggregate Log Change Notices: Not Supported 00:20:03.563 Normal NVM Subsystem Shutdown event: Not Supported 00:20:03.563 Zone Descriptor Change Notices: Not Supported 00:20:03.563 Discovery Log Change Notices: Not Supported 00:20:03.563 Controller Attributes 00:20:03.563 128-bit Host Identifier: Supported 00:20:03.563 Non-Operational Permissive Mode: Not Supported 00:20:03.563 NVM Sets: Not Supported 00:20:03.563 Read Recovery Levels: Not Supported 00:20:03.563 Endurance Groups: Not Supported 00:20:03.563 Predictable Latency Mode: Not Supported 00:20:03.563 Traffic Based Keep ALive: Not Supported 00:20:03.563 Namespace Granularity: Not Supported 00:20:03.563 SQ Associations: Not Supported 00:20:03.563 UUID List: Not Supported 00:20:03.563 Multi-Domain Subsystem: Not Supported 00:20:03.563 Fixed Capacity Management: Not Supported 00:20:03.563 Variable Capacity Management: Not Supported 00:20:03.563 Delete Endurance Group: Not Supported 00:20:03.563 Delete NVM Set: Not Supported 00:20:03.563 Extended LBA Formats Supported: Not Supported 00:20:03.563 Flexible Data Placement Supported: Not Supported 00:20:03.563 00:20:03.563 Controller Memory Buffer Support 00:20:03.563 ================================ 00:20:03.563 Supported: No 00:20:03.563 00:20:03.563 Persistent Memory Region Support 00:20:03.563 ================================ 00:20:03.563 Supported: No 00:20:03.563 00:20:03.563 Admin Command Set Attributes 00:20:03.563 ============================ 00:20:03.563 Security Send/Receive: Not Supported 00:20:03.563 Format NVM: Not Supported 00:20:03.563 Firmware Activate/Download: Not Supported 00:20:03.563 Namespace Management: Not Supported 00:20:03.563 Device Self-Test: Not Supported 00:20:03.563 Directives: Not Supported 00:20:03.563 NVMe-MI: Not Supported 00:20:03.563 Virtualization Management: Not Supported 00:20:03.563 Doorbell Buffer Config: Not Supported 00:20:03.563 Get LBA Status Capability: Not Supported 00:20:03.563 Command & Feature Lockdown Capability: Not Supported 00:20:03.563 Abort Command Limit: 4 00:20:03.563 Async Event Request Limit: 4 00:20:03.563 Number of Firmware Slots: N/A 00:20:03.563 Firmware Slot 1 Read-Only: N/A 00:20:03.563 Firmware Activation Without Reset: N/A 00:20:03.563 Multiple Update Detection Support: N/A 00:20:03.563 Firmware Update Granularity: No Information Provided 00:20:03.563 Per-Namespace SMART Log: No 00:20:03.563 Asymmetric Namespace Access Log Page: Not Supported 00:20:03.563 
Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:03.563 Command Effects Log Page: Supported 00:20:03.563 Get Log Page Extended Data: Supported 00:20:03.563 Telemetry Log Pages: Not Supported 00:20:03.563 Persistent Event Log Pages: Not Supported 00:20:03.563 Supported Log Pages Log Page: May Support 00:20:03.563 Commands Supported & Effects Log Page: Not Supported 00:20:03.563 Feature Identifiers & Effects Log Page:May Support 00:20:03.563 NVMe-MI Commands & Effects Log Page: May Support 00:20:03.563 Data Area 4 for Telemetry Log: Not Supported 00:20:03.563 Error Log Page Entries Supported: 128 00:20:03.563 Keep Alive: Supported 00:20:03.563 Keep Alive Granularity: 10000 ms 00:20:03.563 00:20:03.563 NVM Command Set Attributes 00:20:03.563 ========================== 00:20:03.563 Submission Queue Entry Size 00:20:03.563 Max: 64 00:20:03.563 Min: 64 00:20:03.563 Completion Queue Entry Size 00:20:03.563 Max: 16 00:20:03.563 Min: 16 00:20:03.563 Number of Namespaces: 32 00:20:03.563 Compare Command: Supported 00:20:03.563 Write Uncorrectable Command: Not Supported 00:20:03.563 Dataset Management Command: Supported 00:20:03.563 Write Zeroes Command: Supported 00:20:03.563 Set Features Save Field: Not Supported 00:20:03.563 Reservations: Supported 00:20:03.563 Timestamp: Not Supported 00:20:03.563 Copy: Supported 00:20:03.563 Volatile Write Cache: Present 00:20:03.563 Atomic Write Unit (Normal): 1 00:20:03.563 Atomic Write Unit (PFail): 1 00:20:03.563 Atomic Compare & Write Unit: 1 00:20:03.563 Fused Compare & Write: Supported 00:20:03.563 Scatter-Gather List 00:20:03.563 SGL Command Set: Supported 00:20:03.563 SGL Keyed: Supported 00:20:03.563 SGL Bit Bucket Descriptor: Not Supported 00:20:03.563 SGL Metadata Pointer: Not Supported 00:20:03.563 Oversized SGL: Not Supported 00:20:03.563 SGL Metadata Address: Not Supported 00:20:03.563 SGL Offset: Supported 00:20:03.563 Transport SGL Data Block: Not Supported 00:20:03.563 Replay Protected Memory Block: Not Supported 00:20:03.563 00:20:03.563 Firmware Slot Information 00:20:03.563 ========================= 00:20:03.563 Active slot: 1 00:20:03.563 Slot 1 Firmware Revision: 24.01.1 00:20:03.563 00:20:03.563 00:20:03.563 Commands Supported and Effects 00:20:03.563 ============================== 00:20:03.563 Admin Commands 00:20:03.563 -------------- 00:20:03.563 Get Log Page (02h): Supported 00:20:03.563 Identify (06h): Supported 00:20:03.563 Abort (08h): Supported 00:20:03.563 Set Features (09h): Supported 00:20:03.563 Get Features (0Ah): Supported 00:20:03.563 Asynchronous Event Request (0Ch): Supported 00:20:03.563 Keep Alive (18h): Supported 00:20:03.563 I/O Commands 00:20:03.563 ------------ 00:20:03.563 Flush (00h): Supported LBA-Change 00:20:03.563 Write (01h): Supported LBA-Change 00:20:03.563 Read (02h): Supported 00:20:03.563 Compare (05h): Supported 00:20:03.563 Write Zeroes (08h): Supported LBA-Change 00:20:03.563 Dataset Management (09h): Supported LBA-Change 00:20:03.563 Copy (19h): Supported LBA-Change 00:20:03.563 Unknown (79h): Supported LBA-Change 00:20:03.563 Unknown (7Ah): Supported 00:20:03.563 00:20:03.563 Error Log 00:20:03.563 ========= 00:20:03.563 00:20:03.563 Arbitration 00:20:03.563 =========== 00:20:03.563 Arbitration Burst: 1 00:20:03.563 00:20:03.563 Power Management 00:20:03.563 ================ 00:20:03.563 Number of Power States: 1 00:20:03.563 Current Power State: Power State #0 00:20:03.563 Power State #0: 00:20:03.563 Max Power: 0.00 W 00:20:03.563 Non-Operational State: Operational 00:20:03.563 Entry Latency: Not 
Reported 00:20:03.563 Exit Latency: Not Reported 00:20:03.563 Relative Read Throughput: 0 00:20:03.563 Relative Read Latency: 0 00:20:03.563 Relative Write Throughput: 0 00:20:03.563 Relative Write Latency: 0 00:20:03.564 Idle Power: Not Reported 00:20:03.564 Active Power: Not Reported 00:20:03.564 Non-Operational Permissive Mode: Not Supported 00:20:03.564 00:20:03.564 Health Information 00:20:03.564 ================== 00:20:03.564 Critical Warnings: 00:20:03.564 Available Spare Space: OK 00:20:03.564 Temperature: OK 00:20:03.564 Device Reliability: OK 00:20:03.564 Read Only: No 00:20:03.564 Volatile Memory Backup: OK 00:20:03.564 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:03.564 Temperature Threshold: [2024-05-14 02:17:17.904975] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.904984] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.904988] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x98a270) 00:20:03.564 [2024-05-14 02:17:17.904998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.564 [2024-05-14 02:17:17.905029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ca070, cid 7, qid 0 00:20:03.564 [2024-05-14 02:17:17.905106] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.564 [2024-05-14 02:17:17.905114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.564 [2024-05-14 02:17:17.905118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905123] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9ca070) on tqpair=0x98a270 00:20:03.564 [2024-05-14 02:17:17.905164] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:03.564 [2024-05-14 02:17:17.905179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.564 [2024-05-14 02:17:17.905187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.564 [2024-05-14 02:17:17.905195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.564 [2024-05-14 02:17:17.905205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.564 [2024-05-14 02:17:17.905220] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905229] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905240] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.564 [2024-05-14 02:17:17.905252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.564 [2024-05-14 02:17:17.905283] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.564 [2024-05-14 02:17:17.905341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.564 [2024-05-14 02:17:17.905349] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.564 
[2024-05-14 02:17:17.905353] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905358] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.564 [2024-05-14 02:17:17.905366] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905371] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905376] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.564 [2024-05-14 02:17:17.905384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.564 [2024-05-14 02:17:17.905408] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.564 [2024-05-14 02:17:17.905485] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.564 [2024-05-14 02:17:17.905493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.564 [2024-05-14 02:17:17.905497] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.564 [2024-05-14 02:17:17.905507] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:03.564 [2024-05-14 02:17:17.905513] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:03.564 [2024-05-14 02:17:17.905524] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905534] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.564 [2024-05-14 02:17:17.905542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.564 [2024-05-14 02:17:17.905561] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.564 [2024-05-14 02:17:17.905616] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.564 [2024-05-14 02:17:17.905624] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.564 [2024-05-14 02:17:17.905628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905632] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.564 [2024-05-14 02:17:17.905644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905649] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905654] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.564 [2024-05-14 02:17:17.905662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.564 [2024-05-14 02:17:17.905681] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.564 [2024-05-14 02:17:17.905737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.564 [2024-05-14 
02:17:17.905758] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.564 [2024-05-14 02:17:17.905776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.564 [2024-05-14 02:17:17.905795] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905800] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.564 [2024-05-14 02:17:17.905812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.564 [2024-05-14 02:17:17.905847] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.564 [2024-05-14 02:17:17.905911] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.564 [2024-05-14 02:17:17.905919] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.564 [2024-05-14 02:17:17.905923] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905927] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.564 [2024-05-14 02:17:17.905939] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905944] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.905948] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.564 [2024-05-14 02:17:17.905957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.564 [2024-05-14 02:17:17.905976] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.564 [2024-05-14 02:17:17.906031] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.564 [2024-05-14 02:17:17.906038] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.564 [2024-05-14 02:17:17.906042] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.906047] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.564 [2024-05-14 02:17:17.906058] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.906063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.906067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.564 [2024-05-14 02:17:17.906075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.564 [2024-05-14 02:17:17.906094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.564 [2024-05-14 02:17:17.906148] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.564 [2024-05-14 02:17:17.906156] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.564 [2024-05-14 02:17:17.906160] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.564 
[2024-05-14 02:17:17.906164] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.564 [2024-05-14 02:17:17.906175] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.564 [2024-05-14 02:17:17.906181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906185] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.906193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.906212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.906266] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.906278] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.906283] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906287] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.906299] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906304] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906308] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.906316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.906336] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.906390] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.906398] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.906402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906407] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.906417] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906423] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906427] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.906435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.906454] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.906509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.906517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.906521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906525] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.906536] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906545] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.906553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.906573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.906625] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.906633] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.906638] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906642] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.906653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906659] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906663] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.906671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.906690] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.906745] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.906753] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.906757] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906772] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.906786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906791] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.906803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.906825] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.906889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.906897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.906901] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906905] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.906916] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906921] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.906926] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.906933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.906953] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.907010] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.907017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.907022] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.907037] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907042] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907047] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.907054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.907073] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.907129] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.907136] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.907141] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907145] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.907156] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907161] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907165] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.907173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.907193] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.907250] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.907257] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.907262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.907277] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907286] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.907294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.907313] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.907371] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.907379] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.907383] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907388] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.907399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907404] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907409] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.907416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.907436] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.907489] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.907496] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.907500] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907505] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.907516] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907521] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907525] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.565 [2024-05-14 02:17:17.907533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.565 [2024-05-14 02:17:17.907552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.565 [2024-05-14 02:17:17.907606] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.565 [2024-05-14 02:17:17.907613] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.565 [2024-05-14 02:17:17.907618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.565 [2024-05-14 02:17:17.907633] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907638] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.565 [2024-05-14 02:17:17.907642] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.907650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.907669] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 
00:20:03.566 [2024-05-14 02:17:17.907724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.907732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.907736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.907741] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.907752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.907757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.907772] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.907781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.907803] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.907863] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.907870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.907874] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.907879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.907890] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.907895] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.907900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.907907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.907927] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.907981] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.907993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.907997] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908002] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.908013] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908019] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.908031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.908057] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.908120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.908127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:03.566 [2024-05-14 02:17:17.908131] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908136] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.908147] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908152] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.908164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.908183] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.908241] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.908248] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.908252] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908256] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.908267] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908273] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908277] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.908285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.908304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.908358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.908365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.908370] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908374] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.908385] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908390] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908395] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.908402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.908422] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.908479] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.908487] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.908491] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908495] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.908506] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908511] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908516] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.908523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.908543] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.908594] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.908601] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.908605] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908610] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.908621] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908626] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908630] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.908638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.908657] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.908712] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.908719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.908723] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908728] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.908739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908744] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908748] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.908756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.908788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.908844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.908853] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.908857] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908861] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.908873] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908878] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908882] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.908890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.908910] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.908971] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.908978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.908983] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.908987] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.908998] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.909003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.909007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.909015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.909035] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.909090] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.566 [2024-05-14 02:17:17.909102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.566 [2024-05-14 02:17:17.909107] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.909111] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.566 [2024-05-14 02:17:17.909122] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.909128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.566 [2024-05-14 02:17:17.909132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.566 [2024-05-14 02:17:17.909140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.566 [2024-05-14 02:17:17.909160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.566 [2024-05-14 02:17:17.909212] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.909220] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.909224] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909228] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.909239] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909249] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 
00:20:03.567 [2024-05-14 02:17:17.909256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.909276] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.909331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.909342] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.909347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909351] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.909363] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909368] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909372] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.909380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.909400] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.909452] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.909459] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.909463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909468] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.909479] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909484] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909488] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.909496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.909515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.909567] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.909575] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.909579] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909583] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.909594] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909600] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909604] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.909612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 
02:17:17.909631] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.909689] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.909696] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.909700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909705] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.909720] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909730] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.909738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.909757] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.909840] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.909857] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.909862] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909867] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.909879] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909884] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909889] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.909897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.909920] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.909982] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.909989] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.909993] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.909998] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.910009] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.910026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.910046] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.910101] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:20:03.567 [2024-05-14 02:17:17.910113] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.910118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910122] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.910133] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910139] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910143] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.910151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.910171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.910226] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.910245] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.910249] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910253] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.910264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910270] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.910281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.910301] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.910355] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.910363] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.910367] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910371] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.910382] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910387] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910391] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.910399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.910418] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.910476] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.910484] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.910489] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910493] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.910504] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910509] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910514] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.910522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.910541] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.910600] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.910607] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.910611] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910616] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.567 [2024-05-14 02:17:17.910626] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910632] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910636] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.567 [2024-05-14 02:17:17.910644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.567 [2024-05-14 02:17:17.910663] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.567 [2024-05-14 02:17:17.910718] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.567 [2024-05-14 02:17:17.910730] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.567 [2024-05-14 02:17:17.910734] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.567 [2024-05-14 02:17:17.910739] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.910750] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.910755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.910760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.568 [2024-05-14 02:17:17.910779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.568 [2024-05-14 02:17:17.910801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.568 [2024-05-14 02:17:17.910858] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.568 [2024-05-14 02:17:17.910865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.568 [2024-05-14 02:17:17.910869] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.910874] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.910885] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.910890] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.910895] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.568 [2024-05-14 02:17:17.910903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.568 [2024-05-14 02:17:17.910922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.568 [2024-05-14 02:17:17.910977] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.568 [2024-05-14 02:17:17.910984] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.568 [2024-05-14 02:17:17.910988] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.910993] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.911004] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911009] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911014] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.568 [2024-05-14 02:17:17.911022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.568 [2024-05-14 02:17:17.911041] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.568 [2024-05-14 02:17:17.911093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.568 [2024-05-14 02:17:17.911100] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.568 [2024-05-14 02:17:17.911104] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911109] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.911120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911129] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.568 [2024-05-14 02:17:17.911137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.568 [2024-05-14 02:17:17.911156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.568 [2024-05-14 02:17:17.911214] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.568 [2024-05-14 02:17:17.911222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.568 [2024-05-14 02:17:17.911226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.911241] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911246] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 
02:17:17.911251] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.568 [2024-05-14 02:17:17.911258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.568 [2024-05-14 02:17:17.911278] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.568 [2024-05-14 02:17:17.911333] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.568 [2024-05-14 02:17:17.911345] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.568 [2024-05-14 02:17:17.911349] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911354] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.911365] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911371] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911375] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.568 [2024-05-14 02:17:17.911383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.568 [2024-05-14 02:17:17.911403] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.568 [2024-05-14 02:17:17.911455] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.568 [2024-05-14 02:17:17.911462] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.568 [2024-05-14 02:17:17.911466] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911471] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.911482] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911487] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911491] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.568 [2024-05-14 02:17:17.911499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.568 [2024-05-14 02:17:17.911519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.568 [2024-05-14 02:17:17.911574] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.568 [2024-05-14 02:17:17.911581] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.568 [2024-05-14 02:17:17.911586] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911590] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.911601] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911606] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911610] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.568 [2024-05-14 02:17:17.911618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.568 [2024-05-14 02:17:17.911638] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.568 [2024-05-14 02:17:17.911695] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.568 [2024-05-14 02:17:17.911703] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.568 [2024-05-14 02:17:17.911707] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911711] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.911723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911728] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911732] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.568 [2024-05-14 02:17:17.911740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.568 [2024-05-14 02:17:17.911759] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.568 [2024-05-14 02:17:17.911832] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.568 [2024-05-14 02:17:17.911840] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.568 [2024-05-14 02:17:17.911845] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911849] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.911861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911866] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911870] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.568 [2024-05-14 02:17:17.911878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.568 [2024-05-14 02:17:17.911900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.568 [2024-05-14 02:17:17.911958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.568 [2024-05-14 02:17:17.911965] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.568 [2024-05-14 02:17:17.911970] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.568 [2024-05-14 02:17:17.911985] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911990] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.568 [2024-05-14 02:17:17.911994] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.569 [2024-05-14 02:17:17.912002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.569 [2024-05-14 02:17:17.912022] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, 
qid 0 00:20:03.569 [2024-05-14 02:17:17.912073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.569 [2024-05-14 02:17:17.912081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.569 [2024-05-14 02:17:17.912085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912090] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.569 [2024-05-14 02:17:17.912101] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912106] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.569 [2024-05-14 02:17:17.912118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.569 [2024-05-14 02:17:17.912137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.569 [2024-05-14 02:17:17.912195] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.569 [2024-05-14 02:17:17.912202] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.569 [2024-05-14 02:17:17.912206] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912211] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.569 [2024-05-14 02:17:17.912222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912227] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912231] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.569 [2024-05-14 02:17:17.912239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.569 [2024-05-14 02:17:17.912259] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.569 [2024-05-14 02:17:17.912317] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.569 [2024-05-14 02:17:17.912328] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.569 [2024-05-14 02:17:17.912333] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912338] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.569 [2024-05-14 02:17:17.912349] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912354] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.569 [2024-05-14 02:17:17.912367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.569 [2024-05-14 02:17:17.912387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.569 [2024-05-14 02:17:17.912439] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.569 [2024-05-14 02:17:17.912447] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:20:03.569 [2024-05-14 02:17:17.912451] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912455] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.569 [2024-05-14 02:17:17.912466] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.569 [2024-05-14 02:17:17.912483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.569 [2024-05-14 02:17:17.912503] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.569 [2024-05-14 02:17:17.912558] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.569 [2024-05-14 02:17:17.912570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.569 [2024-05-14 02:17:17.912575] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912579] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.569 [2024-05-14 02:17:17.912591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.569 [2024-05-14 02:17:17.912608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.569 [2024-05-14 02:17:17.912628] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.569 [2024-05-14 02:17:17.912689] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.569 [2024-05-14 02:17:17.912700] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.569 [2024-05-14 02:17:17.912705] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912709] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.569 [2024-05-14 02:17:17.912720] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.912730] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.569 [2024-05-14 02:17:17.912738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.569 [2024-05-14 02:17:17.912758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.569 [2024-05-14 02:17:17.916791] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.569 [2024-05-14 02:17:17.916803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.569 [2024-05-14 02:17:17.916807] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.916812] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.569 [2024-05-14 02:17:17.916827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.916833] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.916837] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98a270) 00:20:03.569 [2024-05-14 02:17:17.916846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.569 [2024-05-14 02:17:17.916874] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9af0, cid 3, qid 0 00:20:03.569 [2024-05-14 02:17:17.916947] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.569 [2024-05-14 02:17:17.916955] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.569 [2024-05-14 02:17:17.916959] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.569 [2024-05-14 02:17:17.916963] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9c9af0) on tqpair=0x98a270 00:20:03.569 [2024-05-14 02:17:17.916972] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 11 milliseconds 00:20:03.569 0 Kelvin (-273 Celsius) 00:20:03.569 Available Spare: 0% 00:20:03.569 Available Spare Threshold: 0% 00:20:03.569 Life Percentage Used: 0% 00:20:03.569 Data Units Read: 0 00:20:03.569 Data Units Written: 0 00:20:03.569 Host Read Commands: 0 00:20:03.569 Host Write Commands: 0 00:20:03.569 Controller Busy Time: 0 minutes 00:20:03.569 Power Cycles: 0 00:20:03.569 Power On Hours: 0 hours 00:20:03.569 Unsafe Shutdowns: 0 00:20:03.569 Unrecoverable Media Errors: 0 00:20:03.569 Lifetime Error Log Entries: 0 00:20:03.569 Warning Temperature Time: 0 minutes 00:20:03.569 Critical Temperature Time: 0 minutes 00:20:03.569 00:20:03.569 Number of Queues 00:20:03.569 ================ 00:20:03.569 Number of I/O Submission Queues: 127 00:20:03.569 Number of I/O Completion Queues: 127 00:20:03.569 00:20:03.569 Active Namespaces 00:20:03.569 ================= 00:20:03.569 Namespace ID:1 00:20:03.569 Error Recovery Timeout: Unlimited 00:20:03.569 Command Set Identifier: NVM (00h) 00:20:03.569 Deallocate: Supported 00:20:03.569 Deallocated/Unwritten Error: Not Supported 00:20:03.569 Deallocated Read Value: Unknown 00:20:03.569 Deallocate in Write Zeroes: Not Supported 00:20:03.569 Deallocated Guard Field: 0xFFFF 00:20:03.569 Flush: Supported 00:20:03.569 Reservation: Supported 00:20:03.569 Namespace Sharing Capabilities: Multiple Controllers 00:20:03.569 Size (in LBAs): 131072 (0GiB) 00:20:03.569 Capacity (in LBAs): 131072 (0GiB) 00:20:03.569 Utilization (in LBAs): 131072 (0GiB) 00:20:03.569 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:03.569 EUI64: ABCDEF0123456789 00:20:03.569 UUID: 7a64f922-fe5c-46ee-9ecb-b36ac3603496 00:20:03.569 Thin Provisioning: Not Supported 00:20:03.569 Per-NS Atomic Units: Yes 00:20:03.569 Atomic Boundary Size (Normal): 0 00:20:03.569 Atomic Boundary Size (PFail): 0 00:20:03.569 Atomic Boundary Offset: 0 00:20:03.569 Maximum Single Source Range Length: 65535 00:20:03.569 Maximum Copy Length: 65535 00:20:03.569 Maximum Source Range Count: 1 00:20:03.569 NGUID/EUI64 Never Reused: No 00:20:03.569 Namespace Write Protected: No 00:20:03.569 Number of LBA Formats: 1 00:20:03.569 Current LBA Format: LBA Format #00 00:20:03.569 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:20:03.569 00:20:03.569 02:17:17 -- host/identify.sh@51 -- # sync 00:20:03.569 02:17:17 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.569 02:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.569 02:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.569 02:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.569 02:17:17 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:03.569 02:17:17 -- host/identify.sh@56 -- # nvmftestfini 00:20:03.569 02:17:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:03.569 02:17:17 -- nvmf/common.sh@116 -- # sync 00:20:03.569 02:17:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:03.569 02:17:17 -- nvmf/common.sh@119 -- # set +e 00:20:03.569 02:17:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:03.569 02:17:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:03.569 rmmod nvme_tcp 00:20:03.569 rmmod nvme_fabrics 00:20:03.569 rmmod nvme_keyring 00:20:03.570 02:17:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:03.570 02:17:18 -- nvmf/common.sh@123 -- # set -e 00:20:03.570 02:17:18 -- nvmf/common.sh@124 -- # return 0 00:20:03.570 02:17:18 -- nvmf/common.sh@477 -- # '[' -n 80853 ']' 00:20:03.570 02:17:18 -- nvmf/common.sh@478 -- # killprocess 80853 00:20:03.570 02:17:18 -- common/autotest_common.sh@926 -- # '[' -z 80853 ']' 00:20:03.570 02:17:18 -- common/autotest_common.sh@930 -- # kill -0 80853 00:20:03.570 02:17:18 -- common/autotest_common.sh@931 -- # uname 00:20:03.570 02:17:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:03.570 02:17:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80853 00:20:03.570 02:17:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:03.570 02:17:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:03.570 02:17:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80853' 00:20:03.570 killing process with pid 80853 00:20:03.570 02:17:18 -- common/autotest_common.sh@945 -- # kill 80853 00:20:03.570 [2024-05-14 02:17:18.075772] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:03.570 02:17:18 -- common/autotest_common.sh@950 -- # wait 80853 00:20:03.828 02:17:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:03.828 02:17:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:03.828 02:17:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:03.828 02:17:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.828 02:17:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:03.828 02:17:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.828 02:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.828 02:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.828 02:17:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:03.828 00:20:03.828 real 0m2.581s 00:20:03.828 user 0m7.514s 00:20:03.828 sys 0m0.574s 00:20:03.828 02:17:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:03.828 02:17:18 -- common/autotest_common.sh@10 -- # set +x 00:20:03.828 ************************************ 00:20:03.828 END TEST nvmf_identify 00:20:03.828 ************************************ 00:20:03.828 02:17:18 -- nvmf/nvmf.sh@97 -- # run_test 
nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:03.828 02:17:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:03.828 02:17:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:03.828 02:17:18 -- common/autotest_common.sh@10 -- # set +x 00:20:03.828 ************************************ 00:20:03.828 START TEST nvmf_perf 00:20:03.828 ************************************ 00:20:03.828 02:17:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:04.087 * Looking for test storage... 00:20:04.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:04.087 02:17:18 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.088 02:17:18 -- nvmf/common.sh@7 -- # uname -s 00:20:04.088 02:17:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.088 02:17:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.088 02:17:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.088 02:17:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.088 02:17:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.088 02:17:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.088 02:17:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.088 02:17:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.088 02:17:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.088 02:17:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.088 02:17:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:20:04.088 02:17:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:20:04.088 02:17:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.088 02:17:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.088 02:17:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.088 02:17:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.088 02:17:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.088 02:17:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.088 02:17:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.088 02:17:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.088 02:17:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.088 02:17:18 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.088 02:17:18 -- paths/export.sh@5 -- # export PATH 00:20:04.088 02:17:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.088 02:17:18 -- nvmf/common.sh@46 -- # : 0 00:20:04.088 02:17:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:04.088 02:17:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:04.088 02:17:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:04.088 02:17:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.088 02:17:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.088 02:17:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:04.088 02:17:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:04.088 02:17:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:04.088 02:17:18 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:04.088 02:17:18 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:04.088 02:17:18 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:04.088 02:17:18 -- host/perf.sh@17 -- # nvmftestinit 00:20:04.088 02:17:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:04.088 02:17:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.088 02:17:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:04.088 02:17:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:04.088 02:17:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:04.088 02:17:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.088 02:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.088 02:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.088 02:17:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:04.088 02:17:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:04.088 02:17:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:04.088 02:17:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:04.088 02:17:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:04.088 02:17:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:04.088 02:17:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.088 02:17:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.088 02:17:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:04.088 02:17:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:04.088 02:17:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.088 02:17:18 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.088 02:17:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.088 02:17:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.088 02:17:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.088 02:17:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.088 02:17:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.088 02:17:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.088 02:17:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:04.088 02:17:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:04.088 Cannot find device "nvmf_tgt_br" 00:20:04.088 02:17:18 -- nvmf/common.sh@154 -- # true 00:20:04.088 02:17:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.088 Cannot find device "nvmf_tgt_br2" 00:20:04.088 02:17:18 -- nvmf/common.sh@155 -- # true 00:20:04.088 02:17:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:04.088 02:17:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:04.088 Cannot find device "nvmf_tgt_br" 00:20:04.088 02:17:18 -- nvmf/common.sh@157 -- # true 00:20:04.088 02:17:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:04.088 Cannot find device "nvmf_tgt_br2" 00:20:04.088 02:17:18 -- nvmf/common.sh@158 -- # true 00:20:04.088 02:17:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:04.088 02:17:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:04.088 02:17:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.088 02:17:18 -- nvmf/common.sh@161 -- # true 00:20:04.088 02:17:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.088 02:17:18 -- nvmf/common.sh@162 -- # true 00:20:04.088 02:17:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.088 02:17:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.088 02:17:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.088 02:17:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.347 02:17:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:04.347 02:17:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.347 02:17:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.347 02:17:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:04.347 02:17:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:04.347 02:17:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:04.347 02:17:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:04.347 02:17:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:04.347 02:17:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:04.347 02:17:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.347 02:17:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:20:04.347 02:17:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.347 02:17:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:04.347 02:17:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:04.347 02:17:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.347 02:17:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.347 02:17:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.347 02:17:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.347 02:17:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.347 02:17:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:04.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:20:04.347 00:20:04.347 --- 10.0.0.2 ping statistics --- 00:20:04.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.347 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:20:04.347 02:17:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:04.347 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.347 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:20:04.347 00:20:04.347 --- 10.0.0.3 ping statistics --- 00:20:04.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.347 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:04.347 02:17:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:20:04.347 00:20:04.347 --- 10.0.0.1 ping statistics --- 00:20:04.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.347 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:04.347 02:17:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.347 02:17:18 -- nvmf/common.sh@421 -- # return 0 00:20:04.347 02:17:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:04.347 02:17:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.347 02:17:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:04.347 02:17:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:04.347 02:17:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.347 02:17:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:04.347 02:17:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:04.347 02:17:18 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:04.347 02:17:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:04.347 02:17:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:04.347 02:17:18 -- common/autotest_common.sh@10 -- # set +x 00:20:04.347 02:17:18 -- nvmf/common.sh@469 -- # nvmfpid=81080 00:20:04.347 02:17:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:04.347 02:17:18 -- nvmf/common.sh@470 -- # waitforlisten 81080 00:20:04.348 02:17:18 -- common/autotest_common.sh@819 -- # '[' -z 81080 ']' 00:20:04.348 02:17:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.348 02:17:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:04.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
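At this point the harness has built the veth/netns topology verified by the pings above and is starting the nvmf target inside the nvmf_tgt_ns_spdk namespace, then blocking until the application answers on its RPC socket before any configuration RPCs are sent. A minimal sketch of that start-and-wait step, with the binary path, core mask and socket path taken from this log and the polling loop standing in for autotest_common.sh's waitforlisten helper, could look like:

  sudo ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket until the app is ready to accept RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

Once the socket answers, perf.sh goes on to create the TCP transport, the nqn.2016-06.io.spdk:cnode1 subsystem with its Malloc0 and Nvme0n1 namespaces, and the 10.0.0.2:4420 listener through rpc.py, as traced below.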
00:20:04.348 02:17:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.348 02:17:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:04.348 02:17:18 -- common/autotest_common.sh@10 -- # set +x 00:20:04.348 [2024-05-14 02:17:18.931901] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:04.348 [2024-05-14 02:17:18.931993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.606 [2024-05-14 02:17:19.075911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.606 [2024-05-14 02:17:19.148075] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:04.606 [2024-05-14 02:17:19.148263] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.606 [2024-05-14 02:17:19.148293] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.606 [2024-05-14 02:17:19.148309] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.606 [2024-05-14 02:17:19.148426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.606 [2024-05-14 02:17:19.148741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.606 [2024-05-14 02:17:19.148875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.606 [2024-05-14 02:17:19.148881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.542 02:17:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:05.542 02:17:19 -- common/autotest_common.sh@852 -- # return 0 00:20:05.542 02:17:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:05.542 02:17:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:05.542 02:17:19 -- common/autotest_common.sh@10 -- # set +x 00:20:05.542 02:17:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.542 02:17:19 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:05.542 02:17:19 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:05.800 02:17:20 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:05.800 02:17:20 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:06.369 02:17:20 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:06.369 02:17:20 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:06.628 02:17:20 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:06.628 02:17:20 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:06.628 02:17:20 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:06.628 02:17:20 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:06.628 02:17:20 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:06.887 [2024-05-14 02:17:21.247589] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.887 02:17:21 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.146 02:17:21 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:20:07.146 02:17:21 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.146 02:17:21 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:07.146 02:17:21 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:07.404 02:17:21 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.663 [2024-05-14 02:17:22.212949] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.663 02:17:22 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:07.921 02:17:22 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:07.921 02:17:22 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:07.921 02:17:22 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:07.921 02:17:22 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:09.299 Initializing NVMe Controllers 00:20:09.299 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:09.299 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:09.299 Initialization complete. Launching workers. 00:20:09.299 ======================================================== 00:20:09.299 Latency(us) 00:20:09.299 Device Information : IOPS MiB/s Average min max 00:20:09.299 PCIE (0000:00:06.0) NSID 1 from core 0: 24419.10 95.39 1310.37 372.82 7978.96 00:20:09.299 ======================================================== 00:20:09.299 Total : 24419.10 95.39 1310.37 372.82 7978.96 00:20:09.299 00:20:09.299 02:17:23 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.675 Initializing NVMe Controllers 00:20:10.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:10.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:10.675 Initialization complete. Launching workers. 
00:20:10.675 ======================================================== 00:20:10.675 Latency(us) 00:20:10.675 Device Information : IOPS MiB/s Average min max 00:20:10.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3551.19 13.87 281.28 106.05 8163.45 00:20:10.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 121.63 0.48 8278.42 7012.59 15999.09 00:20:10.675 ======================================================== 00:20:10.675 Total : 3672.82 14.35 546.12 106.05 15999.09 00:20:10.675 00:20:10.675 02:17:24 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:11.610 [2024-05-14 02:17:26.180801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ab40 is same with the state(5) to be set 00:20:11.610 [2024-05-14 02:17:26.180878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ab40 is same with the state(5) to be set 00:20:11.610 [2024-05-14 02:17:26.180906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ab40 is same with the state(5) to be set 00:20:11.610 [2024-05-14 02:17:26.180915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ab40 is same with the state(5) to be set 00:20:11.610 [2024-05-14 02:17:26.180923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ab40 is same with the state(5) to be set 00:20:11.610 [2024-05-14 02:17:26.180932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ab40 is same with the state(5) to be set 00:20:11.610 [2024-05-14 02:17:26.180940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ab40 is same with the state(5) to be set 00:20:11.611 [2024-05-14 02:17:26.180948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ab40 is same with the state(5) to be set 00:20:11.611 [2024-05-14 02:17:26.180956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ab40 is same with the state(5) to be set 00:20:11.869 Initializing NVMe Controllers 00:20:11.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:11.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:11.869 Initialization complete. Launching workers. 
00:20:11.869 ======================================================== 00:20:11.869 Latency(us) 00:20:11.869 Device Information : IOPS MiB/s Average min max 00:20:11.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9275.22 36.23 3451.18 644.79 8264.96 00:20:11.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2678.77 10.46 12026.10 5966.52 23726.73 00:20:11.869 ======================================================== 00:20:11.869 Total : 11954.00 46.70 5372.74 644.79 23726.73 00:20:11.869 00:20:11.869 02:17:26 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:11.869 02:17:26 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:14.411 Initializing NVMe Controllers 00:20:14.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.411 Controller IO queue size 128, less than required. 00:20:14.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:14.411 Controller IO queue size 128, less than required. 00:20:14.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:14.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:14.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:14.411 Initialization complete. Launching workers. 00:20:14.411 ======================================================== 00:20:14.411 Latency(us) 00:20:14.411 Device Information : IOPS MiB/s Average min max 00:20:14.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1823.86 455.97 71197.16 47642.08 114838.66 00:20:14.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 578.30 144.57 227395.36 90382.79 324443.61 00:20:14.411 ======================================================== 00:20:14.411 Total : 2402.16 600.54 108800.43 47642.08 324443.61 00:20:14.411 00:20:14.411 02:17:28 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:14.669 No valid NVMe controllers or AIO or URING devices found 00:20:14.669 Initializing NVMe Controllers 00:20:14.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.669 Controller IO queue size 128, less than required. 00:20:14.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:14.669 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:14.669 Controller IO queue size 128, less than required. 00:20:14.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:14.669 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:20:14.669 WARNING: Some requested NVMe devices were skipped 00:20:14.669 02:17:29 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:17.198 Initializing NVMe Controllers 00:20:17.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.198 Controller IO queue size 128, less than required. 00:20:17.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:17.198 Controller IO queue size 128, less than required. 00:20:17.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:17.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:17.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:17.198 Initialization complete. Launching workers. 00:20:17.198 00:20:17.198 ==================== 00:20:17.198 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:17.198 TCP transport: 00:20:17.198 polls: 9252 00:20:17.198 idle_polls: 6233 00:20:17.198 sock_completions: 3019 00:20:17.198 nvme_completions: 5842 00:20:17.198 submitted_requests: 8876 00:20:17.198 queued_requests: 1 00:20:17.198 00:20:17.198 ==================== 00:20:17.198 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:17.198 TCP transport: 00:20:17.198 polls: 10404 00:20:17.198 idle_polls: 7396 00:20:17.199 sock_completions: 3008 00:20:17.199 nvme_completions: 5533 00:20:17.199 submitted_requests: 8329 00:20:17.199 queued_requests: 1 00:20:17.199 ======================================================== 00:20:17.199 Latency(us) 00:20:17.199 Device Information : IOPS MiB/s Average min max 00:20:17.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1523.89 380.97 85556.58 37736.41 141790.97 00:20:17.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1446.39 361.60 89660.79 34597.33 207740.50 00:20:17.199 ======================================================== 00:20:17.199 Total : 2970.28 742.57 87555.15 34597.33 207740.50 00:20:17.199 00:20:17.199 02:17:31 -- host/perf.sh@66 -- # sync 00:20:17.199 02:17:31 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.456 02:17:31 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:17.456 02:17:31 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:17.456 02:17:31 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:17.714 02:17:32 -- host/perf.sh@72 -- # ls_guid=2f287c3d-300c-4fc7-9365-1a1f9ab29030 00:20:17.714 02:17:32 -- host/perf.sh@73 -- # get_lvs_free_mb 2f287c3d-300c-4fc7-9365-1a1f9ab29030 00:20:17.714 02:17:32 -- common/autotest_common.sh@1343 -- # local lvs_uuid=2f287c3d-300c-4fc7-9365-1a1f9ab29030 00:20:17.714 02:17:32 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:17.714 02:17:32 -- common/autotest_common.sh@1345 -- # local fc 00:20:17.714 02:17:32 -- common/autotest_common.sh@1346 -- # local cs 00:20:17.714 02:17:32 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:17.973 02:17:32 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:17.973 { 
00:20:17.973 "base_bdev": "Nvme0n1", 00:20:17.973 "block_size": 4096, 00:20:17.973 "cluster_size": 4194304, 00:20:17.973 "free_clusters": 1278, 00:20:17.973 "name": "lvs_0", 00:20:17.973 "total_data_clusters": 1278, 00:20:17.973 "uuid": "2f287c3d-300c-4fc7-9365-1a1f9ab29030" 00:20:17.973 } 00:20:17.973 ]' 00:20:17.973 02:17:32 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="2f287c3d-300c-4fc7-9365-1a1f9ab29030") .free_clusters' 00:20:17.973 02:17:32 -- common/autotest_common.sh@1348 -- # fc=1278 00:20:17.973 02:17:32 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="2f287c3d-300c-4fc7-9365-1a1f9ab29030") .cluster_size' 00:20:18.232 02:17:32 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:18.232 02:17:32 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:20:18.232 5112 00:20:18.232 02:17:32 -- common/autotest_common.sh@1353 -- # echo 5112 00:20:18.232 02:17:32 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:18.232 02:17:32 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2f287c3d-300c-4fc7-9365-1a1f9ab29030 lbd_0 5112 00:20:18.491 02:17:32 -- host/perf.sh@80 -- # lb_guid=09fc3a81-1250-4b5a-a434-d88186766351 00:20:18.491 02:17:32 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 09fc3a81-1250-4b5a-a434-d88186766351 lvs_n_0 00:20:19.059 02:17:33 -- host/perf.sh@83 -- # ls_nested_guid=ce030522-e27c-40bf-863f-7fa4ed6b628c 00:20:19.059 02:17:33 -- host/perf.sh@84 -- # get_lvs_free_mb ce030522-e27c-40bf-863f-7fa4ed6b628c 00:20:19.059 02:17:33 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ce030522-e27c-40bf-863f-7fa4ed6b628c 00:20:19.059 02:17:33 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:19.059 02:17:33 -- common/autotest_common.sh@1345 -- # local fc 00:20:19.059 02:17:33 -- common/autotest_common.sh@1346 -- # local cs 00:20:19.059 02:17:33 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:19.059 02:17:33 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:19.059 { 00:20:19.059 "base_bdev": "Nvme0n1", 00:20:19.059 "block_size": 4096, 00:20:19.059 "cluster_size": 4194304, 00:20:19.059 "free_clusters": 0, 00:20:19.059 "name": "lvs_0", 00:20:19.059 "total_data_clusters": 1278, 00:20:19.059 "uuid": "2f287c3d-300c-4fc7-9365-1a1f9ab29030" 00:20:19.059 }, 00:20:19.059 { 00:20:19.059 "base_bdev": "09fc3a81-1250-4b5a-a434-d88186766351", 00:20:19.059 "block_size": 4096, 00:20:19.059 "cluster_size": 4194304, 00:20:19.059 "free_clusters": 1276, 00:20:19.059 "name": "lvs_n_0", 00:20:19.059 "total_data_clusters": 1276, 00:20:19.059 "uuid": "ce030522-e27c-40bf-863f-7fa4ed6b628c" 00:20:19.059 } 00:20:19.059 ]' 00:20:19.059 02:17:33 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ce030522-e27c-40bf-863f-7fa4ed6b628c") .free_clusters' 00:20:19.317 02:17:33 -- common/autotest_common.sh@1348 -- # fc=1276 00:20:19.317 02:17:33 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ce030522-e27c-40bf-863f-7fa4ed6b628c") .cluster_size' 00:20:19.317 02:17:33 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:19.317 02:17:33 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:20:19.317 5104 00:20:19.317 02:17:33 -- common/autotest_common.sh@1353 -- # echo 5104 00:20:19.317 02:17:33 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:19.317 02:17:33 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ce030522-e27c-40bf-863f-7fa4ed6b628c 
lbd_nest_0 5104 00:20:19.575 02:17:34 -- host/perf.sh@88 -- # lb_nested_guid=174e308f-54c4-4a89-a750-8f2c2084d836 00:20:19.575 02:17:34 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.833 02:17:34 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:19.833 02:17:34 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 174e308f-54c4-4a89-a750-8f2c2084d836 00:20:20.091 02:17:34 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.349 02:17:34 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:20.349 02:17:34 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:20.349 02:17:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:20.349 02:17:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:20.349 02:17:34 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:20.638 No valid NVMe controllers or AIO or URING devices found 00:20:20.638 Initializing NVMe Controllers 00:20:20.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.638 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:20.638 WARNING: Some requested NVMe devices were skipped 00:20:20.638 02:17:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:20.638 02:17:35 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.832 Initializing NVMe Controllers 00:20:32.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:32.832 Initialization complete. Launching workers. 
00:20:32.832 ======================================================== 00:20:32.832 Latency(us) 00:20:32.832 Device Information : IOPS MiB/s Average min max 00:20:32.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 978.00 122.25 1022.02 344.25 8400.94 00:20:32.832 ======================================================== 00:20:32.833 Total : 978.00 122.25 1022.02 344.25 8400.94 00:20:32.833 00:20:32.833 02:17:45 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:32.833 02:17:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:32.833 02:17:45 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.833 No valid NVMe controllers or AIO or URING devices found 00:20:32.833 Initializing NVMe Controllers 00:20:32.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.833 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:32.833 WARNING: Some requested NVMe devices were skipped 00:20:32.833 02:17:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:32.833 02:17:45 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.796 Initializing NVMe Controllers 00:20:42.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:42.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:42.796 Initialization complete. Launching workers. 00:20:42.796 ======================================================== 00:20:42.796 Latency(us) 00:20:42.796 Device Information : IOPS MiB/s Average min max 00:20:42.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1064.48 133.06 30074.18 7965.43 239715.41 00:20:42.796 ======================================================== 00:20:42.796 Total : 1064.48 133.06 30074.18 7965.43 239715.41 00:20:42.796 00:20:42.796 02:17:56 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:42.796 02:17:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:42.796 02:17:56 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.796 No valid NVMe controllers or AIO or URING devices found 00:20:42.796 Initializing NVMe Controllers 00:20:42.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:42.796 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:42.796 WARNING: Some requested NVMe devices were skipped 00:20:42.796 02:17:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:42.796 02:17:56 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.794 Initializing NVMe Controllers 00:20:52.794 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.794 Controller IO queue size 128, less than required. 00:20:52.794 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:20:52.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:52.794 Initialization complete. Launching workers. 00:20:52.794 ======================================================== 00:20:52.794 Latency(us) 00:20:52.795 Device Information : IOPS MiB/s Average min max 00:20:52.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3875.05 484.38 33044.17 12426.34 75546.77 00:20:52.795 ======================================================== 00:20:52.795 Total : 3875.05 484.38 33044.17 12426.34 75546.77 00:20:52.795 00:20:52.795 02:18:06 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.795 02:18:06 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 174e308f-54c4-4a89-a750-8f2c2084d836 00:20:52.795 02:18:07 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:53.054 02:18:07 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 09fc3a81-1250-4b5a-a434-d88186766351 00:20:53.312 02:18:07 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:53.880 02:18:08 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:53.880 02:18:08 -- host/perf.sh@114 -- # nvmftestfini 00:20:53.880 02:18:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:53.880 02:18:08 -- nvmf/common.sh@116 -- # sync 00:20:53.880 02:18:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:53.880 02:18:08 -- nvmf/common.sh@119 -- # set +e 00:20:53.880 02:18:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:53.880 02:18:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:53.880 rmmod nvme_tcp 00:20:53.880 rmmod nvme_fabrics 00:20:53.880 rmmod nvme_keyring 00:20:53.880 02:18:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:53.880 02:18:08 -- nvmf/common.sh@123 -- # set -e 00:20:53.880 02:18:08 -- nvmf/common.sh@124 -- # return 0 00:20:53.880 02:18:08 -- nvmf/common.sh@477 -- # '[' -n 81080 ']' 00:20:53.880 02:18:08 -- nvmf/common.sh@478 -- # killprocess 81080 00:20:53.880 02:18:08 -- common/autotest_common.sh@926 -- # '[' -z 81080 ']' 00:20:53.880 02:18:08 -- common/autotest_common.sh@930 -- # kill -0 81080 00:20:53.880 02:18:08 -- common/autotest_common.sh@931 -- # uname 00:20:53.880 02:18:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:53.880 02:18:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81080 00:20:53.880 killing process with pid 81080 00:20:53.880 02:18:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:53.880 02:18:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:53.880 02:18:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81080' 00:20:53.880 02:18:08 -- common/autotest_common.sh@945 -- # kill 81080 00:20:53.880 02:18:08 -- common/autotest_common.sh@950 -- # wait 81080 00:20:55.258 02:18:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:55.258 02:18:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:55.258 02:18:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:55.258 02:18:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:55.258 02:18:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:55.258 02:18:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.258 02:18:09 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:55.258 02:18:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.518 02:18:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:55.518 ************************************ 00:20:55.518 END TEST nvmf_perf 00:20:55.518 ************************************ 00:20:55.518 00:20:55.518 real 0m51.498s 00:20:55.518 user 3m15.269s 00:20:55.518 sys 0m10.380s 00:20:55.518 02:18:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:55.518 02:18:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.518 02:18:09 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:55.518 02:18:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:55.518 02:18:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:55.518 02:18:09 -- common/autotest_common.sh@10 -- # set +x 00:20:55.518 ************************************ 00:20:55.518 START TEST nvmf_fio_host 00:20:55.518 ************************************ 00:20:55.518 02:18:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:55.518 * Looking for test storage... 00:20:55.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:55.518 02:18:09 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.518 02:18:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.518 02:18:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.518 02:18:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.518 02:18:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.518 02:18:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.518 02:18:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.518 02:18:10 -- paths/export.sh@5 -- # export PATH 00:20:55.518 02:18:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.518 02:18:10 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:55.518 02:18:10 -- nvmf/common.sh@7 -- # uname -s 00:20:55.518 02:18:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.518 02:18:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.518 02:18:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.518 02:18:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.518 02:18:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.518 02:18:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.518 02:18:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.518 02:18:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.518 02:18:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.518 02:18:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.518 02:18:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:20:55.518 02:18:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:20:55.518 02:18:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.518 02:18:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.518 02:18:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:55.518 02:18:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.518 02:18:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.518 02:18:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.518 02:18:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.518 02:18:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.518 02:18:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.518 02:18:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.518 02:18:10 -- paths/export.sh@5 -- # export PATH 00:20:55.518 02:18:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.518 02:18:10 -- nvmf/common.sh@46 -- # : 0 00:20:55.518 02:18:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:55.518 02:18:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:55.518 02:18:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:55.518 02:18:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.518 02:18:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.518 02:18:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:55.518 02:18:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:55.518 02:18:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:55.518 02:18:10 -- host/fio.sh@12 -- # nvmftestinit 00:20:55.518 02:18:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:55.518 02:18:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.518 02:18:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:55.518 02:18:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:55.518 02:18:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:55.518 02:18:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.518 02:18:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.518 02:18:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.518 02:18:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:55.518 02:18:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:55.518 02:18:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:55.518 02:18:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:55.518 02:18:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:55.518 02:18:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:55.518 02:18:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.518 02:18:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.518 02:18:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:55.518 02:18:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:55.518 02:18:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:55.518 02:18:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:55.518 02:18:10 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:55.518 02:18:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.518 02:18:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:55.518 02:18:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:55.518 02:18:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:55.518 02:18:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:55.518 02:18:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:55.518 02:18:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:55.518 Cannot find device "nvmf_tgt_br" 00:20:55.518 02:18:10 -- nvmf/common.sh@154 -- # true 00:20:55.518 02:18:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.518 Cannot find device "nvmf_tgt_br2" 00:20:55.518 02:18:10 -- nvmf/common.sh@155 -- # true 00:20:55.518 02:18:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:55.518 02:18:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:55.518 Cannot find device "nvmf_tgt_br" 00:20:55.518 02:18:10 -- nvmf/common.sh@157 -- # true 00:20:55.518 02:18:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:55.518 Cannot find device "nvmf_tgt_br2" 00:20:55.518 02:18:10 -- nvmf/common.sh@158 -- # true 00:20:55.518 02:18:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:55.777 02:18:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:55.777 02:18:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.777 02:18:10 -- nvmf/common.sh@161 -- # true 00:20:55.777 02:18:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.777 02:18:10 -- nvmf/common.sh@162 -- # true 00:20:55.777 02:18:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.777 02:18:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.777 02:18:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:55.777 02:18:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:55.777 02:18:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:55.777 02:18:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:55.777 02:18:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:55.777 02:18:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:55.777 02:18:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:55.777 02:18:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:55.777 02:18:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:55.777 02:18:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:55.777 02:18:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:55.777 02:18:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:55.777 02:18:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:55.777 02:18:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:20:55.777 02:18:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:55.777 02:18:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:55.777 02:18:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:55.777 02:18:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:55.777 02:18:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:55.777 02:18:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:55.777 02:18:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.777 02:18:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:55.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:55.777 00:20:55.777 --- 10.0.0.2 ping statistics --- 00:20:55.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.777 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:55.777 02:18:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:55.777 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:55.777 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:20:55.777 00:20:55.777 --- 10.0.0.3 ping statistics --- 00:20:55.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.777 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:55.777 02:18:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:55.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:55.777 00:20:55.777 --- 10.0.0.1 ping statistics --- 00:20:55.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.777 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:55.777 02:18:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.777 02:18:10 -- nvmf/common.sh@421 -- # return 0 00:20:55.777 02:18:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:55.777 02:18:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.777 02:18:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:55.777 02:18:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:55.777 02:18:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.777 02:18:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:55.777 02:18:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:56.036 02:18:10 -- host/fio.sh@14 -- # [[ y != y ]] 00:20:56.036 02:18:10 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:20:56.036 02:18:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:56.036 02:18:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.036 02:18:10 -- host/fio.sh@22 -- # nvmfpid=82068 00:20:56.036 02:18:10 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:56.036 02:18:10 -- host/fio.sh@26 -- # waitforlisten 82068 00:20:56.036 02:18:10 -- common/autotest_common.sh@819 -- # '[' -z 82068 ']' 00:20:56.036 02:18:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.036 02:18:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:56.036 02:18:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:56.036 02:18:10 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:56.036 02:18:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:56.036 02:18:10 -- common/autotest_common.sh@10 -- # set +x 00:20:56.036 [2024-05-14 02:18:10.434664] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:56.036 [2024-05-14 02:18:10.434755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.036 [2024-05-14 02:18:10.576741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.295 [2024-05-14 02:18:10.645626] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:56.295 [2024-05-14 02:18:10.645800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.295 [2024-05-14 02:18:10.645818] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.295 [2024-05-14 02:18:10.645829] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.295 [2024-05-14 02:18:10.645942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.295 [2024-05-14 02:18:10.645989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.295 [2024-05-14 02:18:10.646120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.295 [2024-05-14 02:18:10.646125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.886 02:18:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:56.886 02:18:11 -- common/autotest_common.sh@852 -- # return 0 00:20:56.886 02:18:11 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.886 02:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:56.886 02:18:11 -- common/autotest_common.sh@10 -- # set +x 00:20:56.886 [2024-05-14 02:18:11.432864] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.886 02:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:56.886 02:18:11 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:20:56.886 02:18:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:56.886 02:18:11 -- common/autotest_common.sh@10 -- # set +x 00:20:57.143 02:18:11 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:57.143 02:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:57.143 02:18:11 -- common/autotest_common.sh@10 -- # set +x 00:20:57.143 Malloc1 00:20:57.143 02:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:57.143 02:18:11 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.143 02:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:57.143 02:18:11 -- common/autotest_common.sh@10 -- # set +x 00:20:57.143 02:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:57.143 02:18:11 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:57.143 02:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:57.143 02:18:11 -- common/autotest_common.sh@10 -- # set +x 00:20:57.143 02:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:57.143 02:18:11 -- host/fio.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.143 02:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:57.143 02:18:11 -- common/autotest_common.sh@10 -- # set +x 00:20:57.143 [2024-05-14 02:18:11.512642] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.143 02:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:57.143 02:18:11 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:57.143 02:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:57.143 02:18:11 -- common/autotest_common.sh@10 -- # set +x 00:20:57.143 02:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:57.143 02:18:11 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:57.143 02:18:11 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:57.143 02:18:11 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:57.143 02:18:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:57.143 02:18:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:57.143 02:18:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:57.143 02:18:11 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:57.143 02:18:11 -- common/autotest_common.sh@1320 -- # shift 00:20:57.143 02:18:11 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:57.143 02:18:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.143 02:18:11 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:57.143 02:18:11 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:57.143 02:18:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:57.143 02:18:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:57.143 02:18:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:57.143 02:18:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.143 02:18:11 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:57.143 02:18:11 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:57.143 02:18:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:57.143 02:18:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:57.143 02:18:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:57.143 02:18:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:57.143 02:18:11 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:57.143 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:57.143 fio-3.35 00:20:57.143 Starting 1 thread 00:20:59.674 00:20:59.674 test: (groupid=0, jobs=1): err= 0: pid=82152: Tue May 14 02:18:13 2024 00:20:59.674 read: IOPS=9291, BW=36.3MiB/s 
(38.1MB/s)(72.8MiB/2007msec) 00:20:59.674 slat (usec): min=2, max=321, avg= 2.71, stdev= 3.40 00:20:59.674 clat (usec): min=3179, max=16068, avg=7311.02, stdev=765.31 00:20:59.674 lat (usec): min=3216, max=16071, avg=7313.73, stdev=765.22 00:20:59.674 clat percentiles (usec): 00:20:59.674 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:20:59.674 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:20:59.674 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8356], 00:20:59.674 | 99.00th=[ 9241], 99.50th=[10421], 99.90th=[15139], 99.95th=[15401], 00:20:59.674 | 99.99th=[16057] 00:20:59.674 bw ( KiB/s): min=36680, max=37728, per=99.93%, avg=37141.50, stdev=487.80, samples=4 00:20:59.674 iops : min= 9170, max= 9432, avg=9285.25, stdev=122.06, samples=4 00:20:59.674 write: IOPS=9295, BW=36.3MiB/s (38.1MB/s)(72.9MiB/2007msec); 0 zone resets 00:20:59.674 slat (usec): min=2, max=298, avg= 2.81, stdev= 3.28 00:20:59.674 clat (usec): min=2398, max=12256, avg=6405.61, stdev=605.03 00:20:59.674 lat (usec): min=2411, max=12259, avg=6408.42, stdev=604.95 00:20:59.674 clat percentiles (usec): 00:20:59.674 | 1.00th=[ 4817], 5.00th=[ 5538], 10.00th=[ 5800], 20.00th=[ 5997], 00:20:59.674 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:20:59.674 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7242], 00:20:59.674 | 99.00th=[ 7898], 99.50th=[ 8979], 99.90th=[10421], 99.95th=[11338], 00:20:59.674 | 99.99th=[12125] 00:20:59.674 bw ( KiB/s): min=36832, max=37640, per=99.98%, avg=37177.50, stdev=374.58, samples=4 00:20:59.674 iops : min= 9208, max= 9410, avg=9294.25, stdev=93.76, samples=4 00:20:59.674 lat (msec) : 4=0.15%, 10=99.41%, 20=0.44% 00:20:59.674 cpu : usr=62.51%, sys=26.17%, ctx=51, majf=0, minf=6 00:20:59.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:59.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:59.674 issued rwts: total=18648,18657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:59.674 00:20:59.674 Run status group 0 (all jobs): 00:20:59.674 READ: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=72.8MiB (76.4MB), run=2007-2007msec 00:20:59.674 WRITE: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=72.9MiB (76.4MB), run=2007-2007msec 00:20:59.674 02:18:14 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:59.674 02:18:14 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:59.674 02:18:14 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:59.674 02:18:14 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:59.674 02:18:14 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:59.674 02:18:14 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.674 02:18:14 -- common/autotest_common.sh@1320 -- # shift 00:20:59.674 02:18:14 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:59.674 02:18:14 -- common/autotest_common.sh@1323 
-- # for sanitizer in "${sanitizers[@]}" 00:20:59.674 02:18:14 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.674 02:18:14 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:59.674 02:18:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:59.674 02:18:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:59.674 02:18:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:59.674 02:18:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.674 02:18:14 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.674 02:18:14 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:59.674 02:18:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:59.674 02:18:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:59.674 02:18:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:59.674 02:18:14 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:59.674 02:18:14 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:59.674 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:59.674 fio-3.35 00:20:59.674 Starting 1 thread 00:21:02.206 00:21:02.206 test: (groupid=0, jobs=1): err= 0: pid=82200: Tue May 14 02:18:16 2024 00:21:02.206 read: IOPS=8342, BW=130MiB/s (137MB/s)(262MiB/2006msec) 00:21:02.206 slat (usec): min=3, max=121, avg= 3.79, stdev= 1.83 00:21:02.206 clat (usec): min=2315, max=18287, avg=8941.15, stdev=2007.74 00:21:02.206 lat (usec): min=2321, max=18291, avg=8944.94, stdev=2007.78 00:21:02.206 clat percentiles (usec): 00:21:02.206 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 7111], 00:21:02.206 | 30.00th=[ 7767], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9372], 00:21:02.206 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11469], 95.00th=[11731], 00:21:02.206 | 99.00th=[13960], 99.50th=[15008], 99.90th=[17171], 99.95th=[17433], 00:21:02.206 | 99.99th=[17695] 00:21:02.206 bw ( KiB/s): min=60128, max=80768, per=52.53%, avg=70120.00, stdev=10588.77, samples=4 00:21:02.206 iops : min= 3758, max= 5048, avg=4382.50, stdev=661.80, samples=4 00:21:02.206 write: IOPS=5046, BW=78.8MiB/s (82.7MB/s)(143MiB/1817msec); 0 zone resets 00:21:02.206 slat (usec): min=37, max=370, avg=39.24, stdev= 6.69 00:21:02.206 clat (usec): min=4225, max=19810, avg=11012.42, stdev=1912.15 00:21:02.206 lat (usec): min=4263, max=19848, avg=11051.66, stdev=1911.98 00:21:02.206 clat percentiles (usec): 00:21:02.206 | 1.00th=[ 7439], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9503], 00:21:02.206 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10814], 60.00th=[11207], 00:21:02.206 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13435], 95.00th=[14484], 00:21:02.206 | 99.00th=[16909], 99.50th=[18482], 99.90th=[19530], 99.95th=[19792], 00:21:02.206 | 99.99th=[19792] 00:21:02.206 bw ( KiB/s): min=61184, max=83040, per=90.17%, avg=72800.00, stdev=10808.83, samples=4 00:21:02.206 iops : min= 3824, max= 5190, avg=4550.00, stdev=675.55, samples=4 00:21:02.206 lat (msec) : 4=0.17%, 10=55.09%, 20=44.74% 00:21:02.206 cpu : usr=73.37%, sys=16.91%, ctx=44, majf=0, minf=21 00:21:02.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:02.206 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:02.206 issued rwts: total=16736,9169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:02.206 00:21:02.206 Run status group 0 (all jobs): 00:21:02.206 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=262MiB (274MB), run=2006-2006msec 00:21:02.206 WRITE: bw=78.8MiB/s (82.7MB/s), 78.8MiB/s-78.8MiB/s (82.7MB/s-82.7MB/s), io=143MiB (150MB), run=1817-1817msec 00:21:02.206 02:18:16 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.206 02:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.206 02:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 02:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.206 02:18:16 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:21:02.206 02:18:16 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:21:02.206 02:18:16 -- host/fio.sh@49 -- # get_nvme_bdfs 00:21:02.206 02:18:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:02.206 02:18:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:21:02.206 02:18:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:02.206 02:18:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:02.206 02:18:16 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:02.206 02:18:16 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:21:02.206 02:18:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:02.206 02:18:16 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:02.206 02:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.206 02:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 Nvme0n1 00:21:02.206 02:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.206 02:18:16 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:02.206 02:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.206 02:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 02:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.206 02:18:16 -- host/fio.sh@51 -- # ls_guid=5603f6c8-7715-482c-8af4-450f843ce43d 00:21:02.206 02:18:16 -- host/fio.sh@52 -- # get_lvs_free_mb 5603f6c8-7715-482c-8af4-450f843ce43d 00:21:02.206 02:18:16 -- common/autotest_common.sh@1343 -- # local lvs_uuid=5603f6c8-7715-482c-8af4-450f843ce43d 00:21:02.206 02:18:16 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:02.206 02:18:16 -- common/autotest_common.sh@1345 -- # local fc 00:21:02.206 02:18:16 -- common/autotest_common.sh@1346 -- # local cs 00:21:02.206 02:18:16 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:02.206 02:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.206 02:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 02:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.206 02:18:16 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:02.206 { 00:21:02.206 "base_bdev": "Nvme0n1", 00:21:02.206 "block_size": 4096, 00:21:02.206 "cluster_size": 1073741824, 00:21:02.206 "free_clusters": 4, 00:21:02.206 "name": "lvs_0", 00:21:02.206 
"total_data_clusters": 4, 00:21:02.206 "uuid": "5603f6c8-7715-482c-8af4-450f843ce43d" 00:21:02.206 } 00:21:02.206 ]' 00:21:02.206 02:18:16 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="5603f6c8-7715-482c-8af4-450f843ce43d") .free_clusters' 00:21:02.206 02:18:16 -- common/autotest_common.sh@1348 -- # fc=4 00:21:02.206 02:18:16 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="5603f6c8-7715-482c-8af4-450f843ce43d") .cluster_size' 00:21:02.206 02:18:16 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:21:02.206 4096 00:21:02.206 02:18:16 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:21:02.206 02:18:16 -- common/autotest_common.sh@1353 -- # echo 4096 00:21:02.206 02:18:16 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:02.206 02:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.206 02:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 d85dfa9b-a635-4676-bfd5-7dc32e499604 00:21:02.206 02:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.206 02:18:16 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:02.206 02:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.206 02:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 02:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.206 02:18:16 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:02.206 02:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.206 02:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 02:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.206 02:18:16 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:02.206 02:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.206 02:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 02:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.206 02:18:16 -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:02.206 02:18:16 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:02.206 02:18:16 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:02.206 02:18:16 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:02.206 02:18:16 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:02.206 02:18:16 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:02.206 02:18:16 -- common/autotest_common.sh@1320 -- # shift 00:21:02.206 02:18:16 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:02.206 02:18:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.206 02:18:16 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:02.206 02:18:16 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:02.206 02:18:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:02.206 02:18:16 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:02.206 02:18:16 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:02.206 02:18:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.206 02:18:16 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:02.207 02:18:16 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:02.207 02:18:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:02.207 02:18:16 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:02.207 02:18:16 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:02.207 02:18:16 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:02.207 02:18:16 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:02.466 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:02.466 fio-3.35 00:21:02.466 Starting 1 thread 00:21:04.999 00:21:04.999 test: (groupid=0, jobs=1): err= 0: pid=82283: Tue May 14 02:18:19 2024 00:21:04.999 read: IOPS=6681, BW=26.1MiB/s (27.4MB/s)(52.4MiB/2008msec) 00:21:04.999 slat (usec): min=2, max=334, avg= 2.61, stdev= 3.61 00:21:04.999 clat (usec): min=3833, max=16091, avg=10184.78, stdev=946.50 00:21:04.999 lat (usec): min=3843, max=16094, avg=10187.39, stdev=946.27 00:21:04.999 clat percentiles (usec): 00:21:04.999 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9372], 00:21:04.999 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:21:04.999 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:21:04.999 | 99.00th=[12387], 99.50th=[12649], 99.90th=[14353], 99.95th=[15664], 00:21:04.999 | 99.99th=[15926] 00:21:04.999 bw ( KiB/s): min=25968, max=27160, per=99.91%, avg=26702.00, stdev=558.14, samples=4 00:21:04.999 iops : min= 6492, max= 6790, avg=6675.50, stdev=139.54, samples=4 00:21:04.999 write: IOPS=6685, BW=26.1MiB/s (27.4MB/s)(52.4MiB/2008msec); 0 zone resets 00:21:04.999 slat (usec): min=2, max=127, avg= 2.70, stdev= 1.59 00:21:04.999 clat (usec): min=2063, max=16115, avg=8883.17, stdev=830.09 00:21:04.999 lat (usec): min=2076, max=16118, avg=8885.87, stdev=829.91 00:21:04.999 clat percentiles (usec): 00:21:04.999 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8225], 00:21:04.999 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:21:04.999 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:21:04.999 | 99.00th=[10683], 99.50th=[11076], 99.90th=[14615], 99.95th=[15533], 00:21:04.999 | 99.99th=[16057] 00:21:04.999 bw ( KiB/s): min=26512, max=26952, per=99.94%, avg=26726.00, stdev=181.65, samples=4 00:21:04.999 iops : min= 6628, max= 6738, avg=6681.50, stdev=45.41, samples=4 00:21:04.999 lat (msec) : 4=0.06%, 10=68.33%, 20=31.60% 00:21:04.999 cpu : usr=70.95%, sys=22.02%, ctx=491, majf=0, minf=25 00:21:04.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:04.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:04.999 issued rwts: total=13416,13425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:04.999 00:21:04.999 Run status group 0 (all jobs): 00:21:04.999 READ: bw=26.1MiB/s (27.4MB/s), 
26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.4MiB (55.0MB), run=2008-2008msec 00:21:04.999 WRITE: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.4MiB (55.0MB), run=2008-2008msec 00:21:04.999 02:18:19 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:04.999 02:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.999 02:18:19 -- common/autotest_common.sh@10 -- # set +x 00:21:04.999 02:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.999 02:18:19 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:04.999 02:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.999 02:18:19 -- common/autotest_common.sh@10 -- # set +x 00:21:04.999 02:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.999 02:18:19 -- host/fio.sh@62 -- # ls_nested_guid=9b00e9c7-d619-429d-968b-6e44aada14da 00:21:04.999 02:18:19 -- host/fio.sh@63 -- # get_lvs_free_mb 9b00e9c7-d619-429d-968b-6e44aada14da 00:21:04.999 02:18:19 -- common/autotest_common.sh@1343 -- # local lvs_uuid=9b00e9c7-d619-429d-968b-6e44aada14da 00:21:04.999 02:18:19 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:04.999 02:18:19 -- common/autotest_common.sh@1345 -- # local fc 00:21:04.999 02:18:19 -- common/autotest_common.sh@1346 -- # local cs 00:21:04.999 02:18:19 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:04.999 02:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.999 02:18:19 -- common/autotest_common.sh@10 -- # set +x 00:21:04.999 02:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.999 02:18:19 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:04.999 { 00:21:04.999 "base_bdev": "Nvme0n1", 00:21:04.999 "block_size": 4096, 00:21:04.999 "cluster_size": 1073741824, 00:21:04.999 "free_clusters": 0, 00:21:04.999 "name": "lvs_0", 00:21:04.999 "total_data_clusters": 4, 00:21:04.999 "uuid": "5603f6c8-7715-482c-8af4-450f843ce43d" 00:21:04.999 }, 00:21:04.999 { 00:21:04.999 "base_bdev": "d85dfa9b-a635-4676-bfd5-7dc32e499604", 00:21:04.999 "block_size": 4096, 00:21:04.999 "cluster_size": 4194304, 00:21:04.999 "free_clusters": 1022, 00:21:04.999 "name": "lvs_n_0", 00:21:04.999 "total_data_clusters": 1022, 00:21:04.999 "uuid": "9b00e9c7-d619-429d-968b-6e44aada14da" 00:21:04.999 } 00:21:04.999 ]' 00:21:04.999 02:18:19 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="9b00e9c7-d619-429d-968b-6e44aada14da") .free_clusters' 00:21:04.999 02:18:19 -- common/autotest_common.sh@1348 -- # fc=1022 00:21:04.999 02:18:19 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="9b00e9c7-d619-429d-968b-6e44aada14da") .cluster_size' 00:21:04.999 02:18:19 -- common/autotest_common.sh@1349 -- # cs=4194304 00:21:04.999 4088 00:21:04.999 02:18:19 -- common/autotest_common.sh@1352 -- # free_mb=4088 00:21:04.999 02:18:19 -- common/autotest_common.sh@1353 -- # echo 4088 00:21:04.999 02:18:19 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:04.999 02:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.999 02:18:19 -- common/autotest_common.sh@10 -- # set +x 00:21:04.999 3d8d700b-235c-4970-8881-5e8de92478ef 00:21:04.999 02:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.999 02:18:19 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:04.999 02:18:19 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:21:04.999 02:18:19 -- common/autotest_common.sh@10 -- # set +x 00:21:04.999 02:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.999 02:18:19 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:04.999 02:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.999 02:18:19 -- common/autotest_common.sh@10 -- # set +x 00:21:04.999 02:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.999 02:18:19 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:04.999 02:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.999 02:18:19 -- common/autotest_common.sh@10 -- # set +x 00:21:04.999 02:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.999 02:18:19 -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.999 02:18:19 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.999 02:18:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:04.999 02:18:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.999 02:18:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:04.999 02:18:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.999 02:18:19 -- common/autotest_common.sh@1320 -- # shift 00:21:04.999 02:18:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:04.999 02:18:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.999 02:18:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.999 02:18:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:04.999 02:18:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:04.999 02:18:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:04.999 02:18:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:04.999 02:18:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.999 02:18:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.999 02:18:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:04.999 02:18:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:04.999 02:18:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:04.999 02:18:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:04.999 02:18:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:04.999 02:18:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.999 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:04.999 fio-3.35 00:21:04.999 Starting 1 thread 00:21:07.582 00:21:07.582 test: (groupid=0, jobs=1): err= 0: pid=82333: Tue May 14 02:18:21 2024 00:21:07.582 read: IOPS=5892, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2009msec) 00:21:07.582 slat 
(usec): min=2, max=340, avg= 2.65, stdev= 3.85 00:21:07.582 clat (usec): min=4318, max=19733, avg=11546.23, stdev=1068.43 00:21:07.582 lat (usec): min=4328, max=19736, avg=11548.88, stdev=1068.18 00:21:07.582 clat percentiles (usec): 00:21:07.582 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:21:07.582 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:21:07.582 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12911], 95.00th=[13304], 00:21:07.582 | 99.00th=[13960], 99.50th=[14353], 99.90th=[18220], 99.95th=[19268], 00:21:07.582 | 99.99th=[19530] 00:21:07.582 bw ( KiB/s): min=22720, max=23912, per=99.97%, avg=23564.00, stdev=566.76, samples=4 00:21:07.582 iops : min= 5680, max= 5978, avg=5891.00, stdev=141.69, samples=4 00:21:07.582 write: IOPS=5892, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2009msec); 0 zone resets 00:21:07.582 slat (usec): min=2, max=246, avg= 2.73, stdev= 2.50 00:21:07.582 clat (usec): min=2406, max=19787, avg=10084.25, stdev=945.02 00:21:07.582 lat (usec): min=2419, max=19790, avg=10086.98, stdev=944.86 00:21:07.582 clat percentiles (usec): 00:21:07.582 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:21:07.582 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:21:07.582 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:21:07.582 | 99.00th=[11994], 99.50th=[12387], 99.90th=[17171], 99.95th=[18482], 00:21:07.582 | 99.99th=[18482] 00:21:07.582 bw ( KiB/s): min=23488, max=23560, per=99.86%, avg=23536.00, stdev=32.66, samples=4 00:21:07.582 iops : min= 5872, max= 5890, avg=5884.00, stdev= 8.16, samples=4 00:21:07.582 lat (msec) : 4=0.04%, 10=25.77%, 20=74.19% 00:21:07.582 cpu : usr=70.62%, sys=23.51%, ctx=25, majf=0, minf=25 00:21:07.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:07.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:07.582 issued rwts: total=11839,11838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:07.582 00:21:07.582 Run status group 0 (all jobs): 00:21:07.582 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.5MB), run=2009-2009msec 00:21:07.582 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.5MB), run=2009-2009msec 00:21:07.582 02:18:21 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:07.582 02:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.582 02:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.582 02:18:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.582 02:18:21 -- host/fio.sh@72 -- # sync 00:21:07.582 02:18:21 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:07.582 02:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.582 02:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.582 02:18:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.582 02:18:21 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:21:07.582 02:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.582 02:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.582 02:18:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.582 02:18:21 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:21:07.582 
02:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.582 02:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.582 02:18:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.582 02:18:21 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:21:07.582 02:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.582 02:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.582 02:18:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.582 02:18:21 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:21:07.582 02:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.582 02:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:08.149 02:18:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:08.149 02:18:22 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:08.149 02:18:22 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:21:08.149 02:18:22 -- host/fio.sh@84 -- # nvmftestfini 00:21:08.149 02:18:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:08.149 02:18:22 -- nvmf/common.sh@116 -- # sync 00:21:08.149 02:18:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:08.149 02:18:22 -- nvmf/common.sh@119 -- # set +e 00:21:08.149 02:18:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:08.149 02:18:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:08.149 rmmod nvme_tcp 00:21:08.149 rmmod nvme_fabrics 00:21:08.149 rmmod nvme_keyring 00:21:08.149 02:18:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:08.149 02:18:22 -- nvmf/common.sh@123 -- # set -e 00:21:08.149 02:18:22 -- nvmf/common.sh@124 -- # return 0 00:21:08.149 02:18:22 -- nvmf/common.sh@477 -- # '[' -n 82068 ']' 00:21:08.149 02:18:22 -- nvmf/common.sh@478 -- # killprocess 82068 00:21:08.149 02:18:22 -- common/autotest_common.sh@926 -- # '[' -z 82068 ']' 00:21:08.149 02:18:22 -- common/autotest_common.sh@930 -- # kill -0 82068 00:21:08.149 02:18:22 -- common/autotest_common.sh@931 -- # uname 00:21:08.408 02:18:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:08.408 02:18:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82068 00:21:08.408 02:18:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:08.408 killing process with pid 82068 00:21:08.409 02:18:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:08.409 02:18:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82068' 00:21:08.409 02:18:22 -- common/autotest_common.sh@945 -- # kill 82068 00:21:08.409 02:18:22 -- common/autotest_common.sh@950 -- # wait 82068 00:21:08.409 02:18:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:08.409 02:18:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:08.409 02:18:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:08.409 02:18:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.409 02:18:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:08.409 02:18:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.409 02:18:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.409 02:18:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.409 02:18:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:08.668 00:21:08.668 real 0m13.075s 00:21:08.668 user 0m54.853s 00:21:08.668 sys 0m3.278s 00:21:08.668 02:18:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.668 
************************************ 00:21:08.668 END TEST nvmf_fio_host 00:21:08.668 02:18:23 -- common/autotest_common.sh@10 -- # set +x 00:21:08.668 ************************************ 00:21:08.668 02:18:23 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:08.668 02:18:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:08.668 02:18:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:08.668 02:18:23 -- common/autotest_common.sh@10 -- # set +x 00:21:08.668 ************************************ 00:21:08.668 START TEST nvmf_failover 00:21:08.668 ************************************ 00:21:08.668 02:18:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:08.668 * Looking for test storage... 00:21:08.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:08.668 02:18:23 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:08.668 02:18:23 -- nvmf/common.sh@7 -- # uname -s 00:21:08.668 02:18:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.668 02:18:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.668 02:18:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.668 02:18:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.668 02:18:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.668 02:18:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.668 02:18:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.668 02:18:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.668 02:18:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.668 02:18:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.668 02:18:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:21:08.668 02:18:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:21:08.668 02:18:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.668 02:18:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.668 02:18:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:08.668 02:18:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.668 02:18:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.668 02:18:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.668 02:18:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.668 02:18:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.668 02:18:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.668 02:18:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.668 02:18:23 -- paths/export.sh@5 -- # export PATH 00:21:08.668 02:18:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.668 02:18:23 -- nvmf/common.sh@46 -- # : 0 00:21:08.668 02:18:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:08.668 02:18:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:08.668 02:18:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:08.668 02:18:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.668 02:18:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.668 02:18:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:08.668 02:18:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:08.668 02:18:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:08.668 02:18:23 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.668 02:18:23 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.668 02:18:23 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.668 02:18:23 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.668 02:18:23 -- host/failover.sh@18 -- # nvmftestinit 00:21:08.668 02:18:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:08.668 02:18:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.668 02:18:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:08.668 02:18:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:08.668 02:18:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:08.668 02:18:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.668 02:18:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.668 02:18:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.668 02:18:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:08.668 02:18:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:08.668 02:18:23 -- nvmf/common.sh@411 
-- # [[ virt == phy ]] 00:21:08.668 02:18:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:08.668 02:18:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:08.668 02:18:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:08.668 02:18:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.668 02:18:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.668 02:18:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:08.668 02:18:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:08.668 02:18:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:08.668 02:18:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:08.668 02:18:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:08.668 02:18:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.668 02:18:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:08.668 02:18:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:08.668 02:18:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:08.668 02:18:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:08.668 02:18:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:08.668 02:18:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:08.668 Cannot find device "nvmf_tgt_br" 00:21:08.668 02:18:23 -- nvmf/common.sh@154 -- # true 00:21:08.668 02:18:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.668 Cannot find device "nvmf_tgt_br2" 00:21:08.668 02:18:23 -- nvmf/common.sh@155 -- # true 00:21:08.668 02:18:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:08.668 02:18:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:08.668 Cannot find device "nvmf_tgt_br" 00:21:08.668 02:18:23 -- nvmf/common.sh@157 -- # true 00:21:08.668 02:18:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:08.668 Cannot find device "nvmf_tgt_br2" 00:21:08.668 02:18:23 -- nvmf/common.sh@158 -- # true 00:21:08.669 02:18:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:08.927 02:18:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:08.927 02:18:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.927 02:18:23 -- nvmf/common.sh@161 -- # true 00:21:08.927 02:18:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.927 02:18:23 -- nvmf/common.sh@162 -- # true 00:21:08.927 02:18:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:08.927 02:18:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:08.927 02:18:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:08.927 02:18:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:08.927 02:18:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:08.927 02:18:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:08.927 02:18:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:08.927 02:18:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.2/24 dev nvmf_tgt_if 00:21:08.927 02:18:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:08.927 02:18:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:08.927 02:18:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:08.927 02:18:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:08.927 02:18:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:08.927 02:18:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:08.927 02:18:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:08.927 02:18:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:08.927 02:18:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:08.927 02:18:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:08.927 02:18:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:08.927 02:18:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:08.927 02:18:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:08.927 02:18:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:08.927 02:18:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:08.927 02:18:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:08.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:21:08.927 00:21:08.927 --- 10.0.0.2 ping statistics --- 00:21:08.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.927 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:08.927 02:18:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:08.927 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:08.927 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:08.927 00:21:08.927 --- 10.0.0.3 ping statistics --- 00:21:08.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.927 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:08.927 02:18:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:08.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:08.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:08.927 00:21:08.927 --- 10.0.0.1 ping statistics --- 00:21:08.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.927 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:08.927 02:18:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.927 02:18:23 -- nvmf/common.sh@421 -- # return 0 00:21:08.927 02:18:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:08.927 02:18:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.927 02:18:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:08.928 02:18:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:08.928 02:18:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.928 02:18:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:08.928 02:18:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:08.928 02:18:23 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:08.928 02:18:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:08.928 02:18:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:08.928 02:18:23 -- common/autotest_common.sh@10 -- # set +x 00:21:08.928 02:18:23 -- nvmf/common.sh@469 -- # nvmfpid=82554 00:21:08.928 02:18:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:08.928 02:18:23 -- nvmf/common.sh@470 -- # waitforlisten 82554 00:21:08.928 02:18:23 -- common/autotest_common.sh@819 -- # '[' -z 82554 ']' 00:21:08.928 02:18:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.928 02:18:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:08.928 02:18:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.928 02:18:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:08.928 02:18:23 -- common/autotest_common.sh@10 -- # set +x 00:21:09.185 [2024-05-14 02:18:23.534219] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:09.185 [2024-05-14 02:18:23.534328] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.185 [2024-05-14 02:18:23.674828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:09.185 [2024-05-14 02:18:23.736755] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:09.185 [2024-05-14 02:18:23.736912] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.185 [2024-05-14 02:18:23.736942] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.185 [2024-05-14 02:18:23.736951] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
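For reference, the nvmf_veth_init steps above amount to the following topology, condensed here as a sketch (interface names, addresses, and rules are taken from the commands in this log; link-up steps are omitted for brevity):

    # target runs inside its own network namespace, reachable over veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator is 10.0.0.1, target addresses are 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # host-side veth ends are bridged together; NVMe/TCP traffic on 4420 is allowed in
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT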
00:21:09.185 [2024-05-14 02:18:23.737020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.185 [2024-05-14 02:18:23.737079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.185 [2024-05-14 02:18:23.737086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.119 02:18:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:10.119 02:18:24 -- common/autotest_common.sh@852 -- # return 0 00:21:10.119 02:18:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:10.119 02:18:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:10.119 02:18:24 -- common/autotest_common.sh@10 -- # set +x 00:21:10.119 02:18:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.119 02:18:24 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:10.377 [2024-05-14 02:18:24.767834] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.377 02:18:24 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:10.636 Malloc0 00:21:10.636 02:18:25 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.894 02:18:25 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.153 02:18:25 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.412 [2024-05-14 02:18:25.845511] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.412 02:18:25 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:11.671 [2024-05-14 02:18:26.121822] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:11.671 02:18:26 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:11.930 [2024-05-14 02:18:26.414110] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:11.930 02:18:26 -- host/failover.sh@31 -- # bdevperf_pid=82666 00:21:11.930 02:18:26 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:11.930 02:18:26 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.930 02:18:26 -- host/failover.sh@34 -- # waitforlisten 82666 /var/tmp/bdevperf.sock 00:21:11.930 02:18:26 -- common/autotest_common.sh@819 -- # '[' -z 82666 ']' 00:21:11.930 02:18:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.930 02:18:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:11.930 02:18:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
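Condensed from the RPC calls above, the target-side setup for the failover test looks roughly like this before bdevperf is started (paths and names as they appear in the log; an abbreviated sketch of host/failover.sh, not the literal script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport plus a 64 MiB, 512-byte-block malloc namespace behind one subsystem
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # three listeners on the same target address, one per failover leg
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf acts as the initiator and is driven over its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &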
00:21:11.930 02:18:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:11.930 02:18:26 -- common/autotest_common.sh@10 -- # set +x 00:21:12.869 02:18:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:12.869 02:18:27 -- common/autotest_common.sh@852 -- # return 0 00:21:12.869 02:18:27 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:13.435 NVMe0n1 00:21:13.435 02:18:27 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:13.693 00:21:13.693 02:18:28 -- host/failover.sh@39 -- # run_test_pid=82723 00:21:13.693 02:18:28 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.694 02:18:28 -- host/failover.sh@41 -- # sleep 1 00:21:14.630 02:18:29 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.890 [2024-05-14 02:18:29.326550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326674] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326697] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326727] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326928] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 
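On the initiator side, the sequence leading up to the messages above boils down to attaching two paths under the same controller name and then removing the first listener, so that I/O is expected to fail over to port 4421 (commands condensed from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # two paths to the same subsystem, both registered under the name NVMe0
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # start the verify workload, then drop the primary listener to force the failover
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420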
00:21:14.890 [2024-05-14 02:18:29.326936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.326994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 [2024-05-14 02:18:29.327084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ec20 is same with the state(5) to be set 00:21:14.890 02:18:29 -- host/failover.sh@45 -- # sleep 3 00:21:18.195 02:18:32 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.195 00:21:18.195 02:18:32 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 
00:21:18.455 [2024-05-14 02:18:32.958675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7fb20 is same with the state(5) to be set 
[... the same tcp.c:1574 recv-state error for tqpair=0xe7fb20 repeats, message timestamps 02:18:32.958725 through 02:18:32.959420 ...] 
00:21:18.456 02:18:32 -- host/failover.sh@50 -- # sleep 3 
00:21:21.742 02:18:35 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:21:21.742 [2024-05-14 02:18:36.241039] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:21.742 02:18:36 -- host/failover.sh@55 -- # sleep 1 
00:21:22.678 02:18:37 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 
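The failover.sh steps just above are the "move the listener" phase of this iteration: pause while I/O runs, re-add the TCP listener on port 4420, then remove the one on port 4422. As a rough sketch only, reconstructed from these log lines rather than copied from test/nvmf/host/failover.sh (the shell variable names and the shebang are mine, not the script's), the step amounts to:

  #!/usr/bin/env bash
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # SPDK JSON-RPC helper seen throughout this log
  nqn=nqn.2016-06.io.spdk:cnode1                    # subsystem under test

  sleep 3                                           # let bdevperf keep issuing I/O against the current path
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420      # target announces port 4420 again
  sleep 1
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422   # drop the listener on port 4422

The log then resumes with another burst of target-side recv-state messages: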
00:21:22.937 [2024-05-14 02:18:37.512926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1033f80 is same with the state(5) to be set 
[... the same tcp.c:1574 recv-state error for tqpair=0x1033f80 repeats, message timestamps 02:18:37.512979 through 02:18:37.513465 ...] 
00:21:23.197 02:18:37 -- host/failover.sh@59 -- # wait 82723 
00:21:29.765 0 
00:21:29.765 02:18:43 -- host/failover.sh@61 -- # killprocess 82666 
00:21:29.765 02:18:43 -- common/autotest_common.sh@926 -- # '[' -z 82666 ']' 
00:21:29.765 02:18:43 -- common/autotest_common.sh@930 -- # kill -0 82666 
00:21:29.765 02:18:43 -- common/autotest_common.sh@931 -- # uname 
00:21:29.765 02:18:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:21:29.765 02:18:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82666 
00:21:29.765 killing process with pid 82666 
00:21:29.765 02:18:43 -- common/autotest_common.sh@932 -- # 
process_name=reactor_0 00:21:29.765 02:18:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:29.765 02:18:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82666' 00:21:29.765 02:18:43 -- common/autotest_common.sh@945 -- # kill 82666 00:21:29.765 02:18:43 -- common/autotest_common.sh@950 -- # wait 82666 00:21:29.765 02:18:43 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:29.765 [2024-05-14 02:18:26.496507] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:29.765 [2024-05-14 02:18:26.496629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82666 ] 00:21:29.765 [2024-05-14 02:18:26.631336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.765 [2024-05-14 02:18:26.698974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.765 Running I/O for 15 seconds... 00:21:29.765 [2024-05-14 02:18:29.327295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-05-14 02:18:29.327378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-05-14 02:18:29.327410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:119480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-05-14 02:18:29.327438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-05-14 02:18:29.327467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-05-14 02:18:29.327496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-05-14 02:18:29.327524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-05-14 02:18:29.327552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-05-14 02:18:29.327580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-05-14 02:18:29.327609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-05-14 02:18:29.327654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-05-14 02:18:29.327667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.327706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.327721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.327737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.327750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.327766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.327780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.327809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.327827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.327843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.327857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.327872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.327892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:29.766 [2024-05-14 02:18:29.327909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.327922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.327939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.327953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.327968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.327982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.327997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 
02:18:29.328226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328844] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-05-14 02:18:29.328962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-05-14 02:18:29.328977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-05-14 02:18:29.328991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.329020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.329117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.329235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.329300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.329330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:29.767 [2024-05-14 02:18:29.329803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.329907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.329938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.329967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.329983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.329996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.330014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.330029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.330045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.330059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.330075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.330096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.330113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 
02:18:29.330127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.330142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.767 [2024-05-14 02:18:29.330156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.330172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.330186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.330201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.330215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.330230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.767 [2024-05-14 02:18:29.330244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.767 [2024-05-14 02:18:29.330260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.330551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.330581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.330640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330728] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.330758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.330830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.330900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.330930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.330960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.330976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.330989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.331021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.331051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.331080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.768 [2024-05-14 02:18:29.331111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.331141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.331170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.331201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.331230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.331266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.331296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.768 [2024-05-14 02:18:29.331326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4fc0 is same with the state(5) to be set 00:21:29.768 [2024-05-14 02:18:29.331360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.768 [2024-05-14 02:18:29.331371] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.768 [2024-05-14 02:18:29.331384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119632 len:8 PRP1 0x0 PRP2 0x0 00:21:29.768 [2024-05-14 02:18:29.331398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331454] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb4fc0 was disconnected and freed. reset controller. 00:21:29.768 [2024-05-14 02:18:29.331474] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:29.768 [2024-05-14 02:18:29.331538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.768 [2024-05-14 02:18:29.331559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.768 [2024-05-14 02:18:29.331591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.768 [2024-05-14 02:18:29.331620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.768 [2024-05-14 02:18:29.331635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.769 [2024-05-14 02:18:29.331648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:29.331662] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.769 [2024-05-14 02:18:29.331709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd48010 (9): Bad file descriptor 00:21:29.769 [2024-05-14 02:18:29.334222] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.769 [2024-05-14 02:18:29.367160] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:29.769 [2024-05-14 02:18:32.959539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 
02:18:32.959926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.959986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.959999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.769 [2024-05-14 02:18:32.960098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.769 [2024-05-14 02:18:32.960160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960570] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.769 [2024-05-14 02:18:32.960610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.769 [2024-05-14 02:18:32.960624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.960657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.960687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.960716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.960746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.960792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.960822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.960852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.960889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.960919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.960949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.960980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.960996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.961010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:29.770 [2024-05-14 02:18:32.961527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.961557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.961586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.961615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.961653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.770 [2024-05-14 02:18:32.961683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.770 [2024-05-14 02:18:32.961713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.770 [2024-05-14 02:18:32.961729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.961743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.961758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.961785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.961802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.961816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.961834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 
02:18:32.961849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.961865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.961896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.961914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.961928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.961943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.961957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.961973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.961987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962483] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.962847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.771 [2024-05-14 02:18:32.962970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.962985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.963006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.963022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.771 [2024-05-14 02:18:32.963036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.771 [2024-05-14 02:18:32.963052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 
[2024-05-14 02:18:32.963442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:32.963608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6d10 is same with the state(5) to be set 00:21:29.772 [2024-05-14 02:18:32.963647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.772 [2024-05-14 02:18:32.963658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.772 [2024-05-14 02:18:32.963670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106336 len:8 PRP1 0x0 PRP2 0x0 00:21:29.772 [2024-05-14 02:18:32.963683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963731] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb6d10 was disconnected and freed. reset controller. 
00:21:29.772 [2024-05-14 02:18:32.963749] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:29.772 [2024-05-14 02:18:32.963818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.772 [2024-05-14 02:18:32.963841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.772 [2024-05-14 02:18:32.963870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.772 [2024-05-14 02:18:32.963898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.772 [2024-05-14 02:18:32.963926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:32.963939] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.772 [2024-05-14 02:18:32.963973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd48010 (9): Bad file descriptor 00:21:29.772 [2024-05-14 02:18:32.966548] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.772 [2024-05-14 02:18:32.996433] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:29.772 [2024-05-14 02:18:37.513580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.513654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.513686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.513715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.513779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.513814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.513844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.513882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.513914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.513943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.513972] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.513985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.514000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.514014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.514029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.772 [2024-05-14 02:18:37.514042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.772 [2024-05-14 02:18:37.514058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.514503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.514569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.514626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.514656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.514684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.514785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.514846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39488 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.514906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.514937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.514975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.514990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.515004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.515033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.515062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.515092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.515121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.773 [2024-05-14 02:18:37.515150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.515180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 
[2024-05-14 02:18:37.515210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.515239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.515269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.515299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.773 [2024-05-14 02:18:37.515315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.773 [2024-05-14 02:18:37.515335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515512] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.774 [2024-05-14 02:18:37.515657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.774 [2024-05-14 02:18:37.515796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515827] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.515973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.515988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.516002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.516032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.516061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.774 [2024-05-14 02:18:37.516097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.516127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.774 [2024-05-14 02:18:37.516156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.774 [2024-05-14 02:18:37.516185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.516215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.774 [2024-05-14 02:18:37.516244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.516275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.516304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.516334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.774 [2024-05-14 02:18:37.516363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-05-14 02:18:37.516379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.516392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.516422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:29.775 [2024-05-14 02:18:37.516438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.516452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.516488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.516517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.516547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.516576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.516606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.516635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.516664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.516693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.516721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516737] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.516751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.516794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.516824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.516862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.516899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.516928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.516958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.516973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.516987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.517045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517061] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.517138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.775 [2024-05-14 02:18:37.517323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39272 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.775 [2024-05-14 02:18:37.517556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6eb0 is same with the state(5) to be set 00:21:29.775 [2024-05-14 02:18:37.517588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.775 [2024-05-14 02:18:37.517599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.775 [2024-05-14 02:18:37.517613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39368 len:8 PRP1 0x0 PRP2 0x0 00:21:29.775 [2024-05-14 02:18:37.517627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.775 [2024-05-14 02:18:37.517674] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb6eb0 was disconnected and freed. reset controller. 
00:21:29.775 [2024-05-14 02:18:37.517693] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:29.775 [2024-05-14 02:18:37.517757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.776 [2024-05-14 02:18:37.517794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.776 [2024-05-14 02:18:37.517810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.776 [2024-05-14 02:18:37.517823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.776 [2024-05-14 02:18:37.517837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.776 [2024-05-14 02:18:37.517851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.776 [2024-05-14 02:18:37.517866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.776 [2024-05-14 02:18:37.517891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.776 [2024-05-14 02:18:37.517906] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.776 [2024-05-14 02:18:37.520490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.776 [2024-05-14 02:18:37.520530] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd48010 (9): Bad file descriptor 00:21:29.776 [2024-05-14 02:18:37.550985] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:29.776 00:21:29.776 Latency(us) 00:21:29.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.776 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:29.776 Verification LBA range: start 0x0 length 0x4000 00:21:29.776 NVMe0n1 : 15.01 12421.82 48.52 300.46 0.00 10041.64 573.44 17635.14 00:21:29.776 =================================================================================================================== 00:21:29.776 Total : 12421.82 48.52 300.46 0.00 10041.64 573.44 17635.14 00:21:29.776 Received shutdown signal, test time was about 15.000000 seconds 00:21:29.776 00:21:29.776 Latency(us) 00:21:29.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.776 =================================================================================================================== 00:21:29.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.776 02:18:43 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:29.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:29.776 02:18:43 -- host/failover.sh@65 -- # count=3 00:21:29.776 02:18:43 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:29.776 02:18:43 -- host/failover.sh@73 -- # bdevperf_pid=82928 00:21:29.776 02:18:43 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:29.776 02:18:43 -- host/failover.sh@75 -- # waitforlisten 82928 /var/tmp/bdevperf.sock 00:21:29.776 02:18:43 -- common/autotest_common.sh@819 -- # '[' -z 82928 ']' 00:21:29.776 02:18:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.776 02:18:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:29.776 02:18:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.776 02:18:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:29.776 02:18:43 -- common/autotest_common.sh@10 -- # set +x 00:21:30.034 02:18:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:30.034 02:18:44 -- common/autotest_common.sh@852 -- # return 0 00:21:30.034 02:18:44 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:30.293 [2024-05-14 02:18:44.728301] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:30.293 02:18:44 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:30.552 [2024-05-14 02:18:45.004598] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:30.552 02:18:45 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.810 NVMe0n1 00:21:30.810 02:18:45 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.069 00:21:31.328 02:18:45 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.587 00:21:31.587 02:18:45 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.587 02:18:45 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:31.846 02:18:46 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.104 02:18:46 -- host/failover.sh@87 -- # sleep 3 00:21:35.389 02:18:49 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:35.389 02:18:49 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:35.389 02:18:49 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:35.389 02:18:49 -- host/failover.sh@90 -- # run_test_pid=83065 00:21:35.389 02:18:49 -- host/failover.sh@92 -- # wait 83065 00:21:36.323 0 00:21:36.323 02:18:50 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:36.323 [2024-05-14 02:18:43.493888] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:36.323 [2024-05-14 02:18:43.494060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82928 ] 00:21:36.323 [2024-05-14 02:18:43.626856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.323 [2024-05-14 02:18:43.685509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.323 [2024-05-14 02:18:46.454410] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:36.323 [2024-05-14 02:18:46.454556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.323 [2024-05-14 02:18:46.454583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.323 [2024-05-14 02:18:46.454602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.323 [2024-05-14 02:18:46.454616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.323 [2024-05-14 02:18:46.454631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.323 [2024-05-14 02:18:46.454645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.323 [2024-05-14 02:18:46.454659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.323 [2024-05-14 02:18:46.454673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.323 [2024-05-14 02:18:46.454686] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.323 [2024-05-14 02:18:46.454733] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.323 [2024-05-14 02:18:46.454780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eef010 (9): Bad file descriptor 00:21:36.323 [2024-05-14 02:18:46.460278] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:36.323 Running I/O for 1 seconds... 
00:21:36.323 00:21:36.323 Latency(us) 00:21:36.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.323 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:36.323 Verification LBA range: start 0x0 length 0x4000 00:21:36.323 NVMe0n1 : 1.01 11859.01 46.32 0.00 0.00 10741.77 1459.67 13107.20 00:21:36.323 =================================================================================================================== 00:21:36.323 Total : 11859.01 46.32 0.00 0.00 10741.77 1459.67 13107.20 00:21:36.323 02:18:50 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.323 02:18:50 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:36.582 02:18:51 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.841 02:18:51 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.841 02:18:51 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:37.100 02:18:51 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.358 02:18:51 -- host/failover.sh@101 -- # sleep 3 00:21:40.645 02:18:54 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.645 02:18:54 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:40.645 02:18:55 -- host/failover.sh@108 -- # killprocess 82928 00:21:40.645 02:18:55 -- common/autotest_common.sh@926 -- # '[' -z 82928 ']' 00:21:40.645 02:18:55 -- common/autotest_common.sh@930 -- # kill -0 82928 00:21:40.645 02:18:55 -- common/autotest_common.sh@931 -- # uname 00:21:40.645 02:18:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:40.645 02:18:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82928 00:21:40.645 killing process with pid 82928 00:21:40.645 02:18:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:40.645 02:18:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:40.645 02:18:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82928' 00:21:40.645 02:18:55 -- common/autotest_common.sh@945 -- # kill 82928 00:21:40.645 02:18:55 -- common/autotest_common.sh@950 -- # wait 82928 00:21:40.903 02:18:55 -- host/failover.sh@110 -- # sync 00:21:40.903 02:18:55 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.162 02:18:55 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:41.162 02:18:55 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:41.162 02:18:55 -- host/failover.sh@116 -- # nvmftestfini 00:21:41.162 02:18:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:41.162 02:18:55 -- nvmf/common.sh@116 -- # sync 00:21:41.162 02:18:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:41.162 02:18:55 -- nvmf/common.sh@119 -- # set +e 00:21:41.162 02:18:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:41.162 02:18:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:41.162 rmmod nvme_tcp 00:21:41.162 rmmod nvme_fabrics 00:21:41.162 rmmod nvme_keyring 00:21:41.162 02:18:55 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:41.162 02:18:55 -- nvmf/common.sh@123 -- # set -e 00:21:41.162 02:18:55 -- nvmf/common.sh@124 -- # return 0 00:21:41.162 02:18:55 -- nvmf/common.sh@477 -- # '[' -n 82554 ']' 00:21:41.162 02:18:55 -- nvmf/common.sh@478 -- # killprocess 82554 00:21:41.162 02:18:55 -- common/autotest_common.sh@926 -- # '[' -z 82554 ']' 00:21:41.162 02:18:55 -- common/autotest_common.sh@930 -- # kill -0 82554 00:21:41.162 02:18:55 -- common/autotest_common.sh@931 -- # uname 00:21:41.162 02:18:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:41.162 02:18:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82554 00:21:41.162 killing process with pid 82554 00:21:41.162 02:18:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:41.162 02:18:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:41.162 02:18:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82554' 00:21:41.162 02:18:55 -- common/autotest_common.sh@945 -- # kill 82554 00:21:41.162 02:18:55 -- common/autotest_common.sh@950 -- # wait 82554 00:21:41.421 02:18:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:41.421 02:18:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:41.421 02:18:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:41.421 02:18:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.421 02:18:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:41.421 02:18:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.421 02:18:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.421 02:18:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.421 02:18:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:41.421 00:21:41.421 real 0m32.940s 00:21:41.421 user 2m8.549s 00:21:41.421 sys 0m4.541s 00:21:41.421 02:18:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.421 02:18:55 -- common/autotest_common.sh@10 -- # set +x 00:21:41.421 ************************************ 00:21:41.421 END TEST nvmf_failover 00:21:41.421 ************************************ 00:21:41.681 02:18:56 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:41.681 02:18:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:41.681 02:18:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:41.681 02:18:56 -- common/autotest_common.sh@10 -- # set +x 00:21:41.681 ************************************ 00:21:41.681 START TEST nvmf_discovery 00:21:41.681 ************************************ 00:21:41.681 02:18:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:41.681 * Looking for test storage... 
00:21:41.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:41.681 02:18:56 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:41.681 02:18:56 -- nvmf/common.sh@7 -- # uname -s 00:21:41.681 02:18:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.681 02:18:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.681 02:18:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.681 02:18:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.681 02:18:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.681 02:18:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.681 02:18:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.681 02:18:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.681 02:18:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.681 02:18:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.681 02:18:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:21:41.681 02:18:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:21:41.681 02:18:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.681 02:18:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.681 02:18:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:41.681 02:18:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:41.681 02:18:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.681 02:18:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.681 02:18:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.681 02:18:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.681 02:18:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.681 02:18:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.681 02:18:56 -- paths/export.sh@5 
-- # export PATH 00:21:41.681 02:18:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.681 02:18:56 -- nvmf/common.sh@46 -- # : 0 00:21:41.681 02:18:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:41.681 02:18:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:41.681 02:18:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:41.681 02:18:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.681 02:18:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.681 02:18:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:41.681 02:18:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:41.681 02:18:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:41.681 02:18:56 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:41.681 02:18:56 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:41.681 02:18:56 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:41.681 02:18:56 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:41.681 02:18:56 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:41.681 02:18:56 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:41.681 02:18:56 -- host/discovery.sh@25 -- # nvmftestinit 00:21:41.681 02:18:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:41.681 02:18:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.681 02:18:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:41.681 02:18:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:41.681 02:18:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:41.681 02:18:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.681 02:18:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.681 02:18:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.681 02:18:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:41.681 02:18:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:41.681 02:18:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:41.681 02:18:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:41.681 02:18:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:41.681 02:18:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:41.681 02:18:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.681 02:18:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.681 02:18:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:41.681 02:18:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:41.681 02:18:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:41.681 02:18:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:41.681 02:18:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:41.681 02:18:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.681 02:18:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:41.681 
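
For reference, the fixed parameters this discovery run works with, as set up by nvmf/common.sh and host/discovery.sh above, collected here as plain shell assignments (the hostnqn/hostid pair is generated fresh per run by nvme gen-hostnqn, so that UUID differs between runs):

    NVMF_PORT=4420                  # first data listener
    NVMF_SECOND_PORT=4421           # second data listener (multipath / failover)
    NVMF_THIRD_PORT=4422
    DISCOVERY_PORT=8009             # well-known NVMe-oF discovery port
    DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
    NQN=nqn.2016-06.io.spdk:cnode   # prefix for the subsystems the test creates
    HOST_NQN=nqn.2021-12.io.spdk:test
    HOST_SOCK=/tmp/host.sock        # RPC socket of the host-side (initiator) nvmf_tgt
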
02:18:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:41.681 02:18:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:41.681 02:18:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:41.682 02:18:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:41.682 02:18:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:41.682 Cannot find device "nvmf_tgt_br" 00:21:41.682 02:18:56 -- nvmf/common.sh@154 -- # true 00:21:41.682 02:18:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:41.682 Cannot find device "nvmf_tgt_br2" 00:21:41.682 02:18:56 -- nvmf/common.sh@155 -- # true 00:21:41.682 02:18:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:41.682 02:18:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:41.682 Cannot find device "nvmf_tgt_br" 00:21:41.682 02:18:56 -- nvmf/common.sh@157 -- # true 00:21:41.682 02:18:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:41.682 Cannot find device "nvmf_tgt_br2" 00:21:41.682 02:18:56 -- nvmf/common.sh@158 -- # true 00:21:41.682 02:18:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:41.940 02:18:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:41.941 02:18:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:41.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.941 02:18:56 -- nvmf/common.sh@161 -- # true 00:21:41.941 02:18:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:41.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.941 02:18:56 -- nvmf/common.sh@162 -- # true 00:21:41.941 02:18:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:41.941 02:18:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:41.941 02:18:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:41.941 02:18:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:41.941 02:18:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:41.941 02:18:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:41.941 02:18:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:41.941 02:18:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:41.941 02:18:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:41.941 02:18:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:41.941 02:18:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:41.941 02:18:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:41.941 02:18:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:41.941 02:18:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:41.941 02:18:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:41.941 02:18:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:41.941 02:18:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:41.941 02:18:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:41.941 02:18:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:21:41.941 02:18:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:41.941 02:18:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:41.941 02:18:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:41.941 02:18:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:41.941 02:18:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:41.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:21:41.941 00:21:41.941 --- 10.0.0.2 ping statistics --- 00:21:41.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.941 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:41.941 02:18:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:41.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:41.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:21:41.941 00:21:41.941 --- 10.0.0.3 ping statistics --- 00:21:41.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.941 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:41.941 02:18:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:41.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:21:41.941 00:21:41.941 --- 10.0.0.1 ping statistics --- 00:21:41.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.941 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:41.941 02:18:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.941 02:18:56 -- nvmf/common.sh@421 -- # return 0 00:21:41.941 02:18:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:41.941 02:18:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.941 02:18:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:41.941 02:18:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:41.941 02:18:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.941 02:18:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:41.941 02:18:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:41.941 02:18:56 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:41.941 02:18:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:41.941 02:18:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:41.941 02:18:56 -- common/autotest_common.sh@10 -- # set +x 00:21:41.941 02:18:56 -- nvmf/common.sh@469 -- # nvmfpid=83363 00:21:41.941 02:18:56 -- nvmf/common.sh@470 -- # waitforlisten 83363 00:21:41.941 02:18:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:41.941 02:18:56 -- common/autotest_common.sh@819 -- # '[' -z 83363 ']' 00:21:41.941 02:18:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.941 02:18:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:41.941 02:18:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
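
The nvmf_veth_init sequence above builds a small virtual topology: the target runs inside the nvmf_tgt_ns_spdk network namespace and is reached from the initiator side through veth pairs joined by the nvmf_br bridge. A condensed sketch of that setup using the names and addresses from the trace (error handling and the second target interface omitted):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator, one for the target; the *_br ends join the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # sanity check: the initiator side can reach the target namespace
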
00:21:41.941 02:18:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:41.941 02:18:56 -- common/autotest_common.sh@10 -- # set +x 00:21:42.200 [2024-05-14 02:18:56.582711] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:42.200 [2024-05-14 02:18:56.582843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.200 [2024-05-14 02:18:56.724837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.460 [2024-05-14 02:18:56.792325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:42.460 [2024-05-14 02:18:56.792498] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.460 [2024-05-14 02:18:56.792512] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.460 [2024-05-14 02:18:56.792521] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.460 [2024-05-14 02:18:56.792551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.393 02:18:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:43.393 02:18:57 -- common/autotest_common.sh@852 -- # return 0 00:21:43.393 02:18:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:43.393 02:18:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:43.393 02:18:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.393 02:18:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.393 02:18:57 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.393 02:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.393 02:18:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.393 [2024-05-14 02:18:57.663874] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.393 02:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.393 02:18:57 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:43.393 02:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.393 02:18:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.393 [2024-05-14 02:18:57.671966] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:43.393 02:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.393 02:18:57 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:43.393 02:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.393 02:18:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.393 null0 00:21:43.393 02:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.393 02:18:57 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:43.393 02:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.393 02:18:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.393 null1 00:21:43.393 02:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.393 02:18:57 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:43.393 02:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.393 02:18:57 -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.393 02:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.393 02:18:57 -- host/discovery.sh@45 -- # hostpid=83419 00:21:43.393 02:18:57 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:43.393 02:18:57 -- host/discovery.sh@46 -- # waitforlisten 83419 /tmp/host.sock 00:21:43.393 02:18:57 -- common/autotest_common.sh@819 -- # '[' -z 83419 ']' 00:21:43.393 02:18:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:43.393 02:18:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:43.393 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:43.393 02:18:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:43.393 02:18:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:43.393 02:18:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.393 [2024-05-14 02:18:57.758683] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:43.393 [2024-05-14 02:18:57.758791] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83419 ] 00:21:43.393 [2024-05-14 02:18:57.896189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.393 [2024-05-14 02:18:57.958120] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:43.393 [2024-05-14 02:18:57.958302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.324 02:18:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:44.324 02:18:58 -- common/autotest_common.sh@852 -- # return 0 00:21:44.324 02:18:58 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.324 02:18:58 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:44.324 02:18:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.324 02:18:58 -- common/autotest_common.sh@10 -- # set +x 00:21:44.324 02:18:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.324 02:18:58 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:44.324 02:18:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.324 02:18:58 -- common/autotest_common.sh@10 -- # set +x 00:21:44.324 02:18:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.324 02:18:58 -- host/discovery.sh@72 -- # notify_id=0 00:21:44.324 02:18:58 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:44.324 02:18:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.324 02:18:58 -- host/discovery.sh@59 -- # sort 00:21:44.324 02:18:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.324 02:18:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.324 02:18:58 -- common/autotest_common.sh@10 -- # set +x 00:21:44.324 02:18:58 -- host/discovery.sh@59 -- # xargs 00:21:44.324 02:18:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.324 02:18:58 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:44.324 02:18:58 -- host/discovery.sh@79 -- # get_bdev_list 00:21:44.324 
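
At this point two SPDK processes are running: the target inside the namespace (pid 83363, default RPC socket) and a host-side nvmf_tgt acting as the initiator (pid 83419, RPC socket /tmp/host.sock). The discovery checks that follow are driven through small helpers that reduce to rpc.py plus jq; a minimal sketch of the start-discovery step and the two list helpers, using rpc.py directly in place of the suite's rpc_cmd wrapper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # attach the host to the discovery service on 10.0.0.2:8009
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # get_subsystem_names: controllers the discovery service has attached (e.g. "nvme0")
    $rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs

    # get_bdev_list: bdevs created from attached namespaces (e.g. "nvme0n1 nvme0n2")
    $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
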
02:18:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.324 02:18:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.324 02:18:58 -- common/autotest_common.sh@10 -- # set +x 00:21:44.324 02:18:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.324 02:18:58 -- host/discovery.sh@55 -- # sort 00:21:44.324 02:18:58 -- host/discovery.sh@55 -- # xargs 00:21:44.324 02:18:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.582 02:18:58 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:44.582 02:18:58 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:44.582 02:18:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.582 02:18:58 -- common/autotest_common.sh@10 -- # set +x 00:21:44.582 02:18:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.582 02:18:58 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:44.582 02:18:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.582 02:18:58 -- host/discovery.sh@59 -- # sort 00:21:44.582 02:18:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.582 02:18:58 -- common/autotest_common.sh@10 -- # set +x 00:21:44.582 02:18:58 -- host/discovery.sh@59 -- # xargs 00:21:44.583 02:18:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.583 02:18:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.583 02:18:58 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:44.583 02:18:58 -- host/discovery.sh@83 -- # get_bdev_list 00:21:44.583 02:18:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.583 02:18:58 -- host/discovery.sh@55 -- # sort 00:21:44.583 02:18:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.583 02:18:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.583 02:18:58 -- common/autotest_common.sh@10 -- # set +x 00:21:44.583 02:18:58 -- host/discovery.sh@55 -- # xargs 00:21:44.583 02:18:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.583 02:18:59 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:44.583 02:18:59 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:44.583 02:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.583 02:18:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.583 02:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.583 02:18:59 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:44.583 02:18:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.583 02:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.583 02:18:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.583 02:18:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.583 02:18:59 -- host/discovery.sh@59 -- # sort 00:21:44.583 02:18:59 -- host/discovery.sh@59 -- # xargs 00:21:44.583 02:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.583 02:18:59 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:44.583 02:18:59 -- host/discovery.sh@87 -- # get_bdev_list 00:21:44.583 02:18:59 -- host/discovery.sh@55 -- # sort 00:21:44.583 02:18:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.583 02:18:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.583 02:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.583 02:18:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.583 02:18:59 -- host/discovery.sh@55 -- # 
xargs 00:21:44.583 02:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.583 02:18:59 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:44.583 02:18:59 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:44.583 02:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.583 02:18:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.583 [2024-05-14 02:18:59.160932] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.583 02:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.583 02:18:59 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:44.583 02:18:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.583 02:18:59 -- host/discovery.sh@59 -- # sort 00:21:44.583 02:18:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.583 02:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.583 02:18:59 -- host/discovery.sh@59 -- # xargs 00:21:44.583 02:18:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.842 02:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.842 02:18:59 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:44.842 02:18:59 -- host/discovery.sh@93 -- # get_bdev_list 00:21:44.842 02:18:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.842 02:18:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.842 02:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.842 02:18:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.842 02:18:59 -- host/discovery.sh@55 -- # sort 00:21:44.842 02:18:59 -- host/discovery.sh@55 -- # xargs 00:21:44.842 02:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.842 02:18:59 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:44.842 02:18:59 -- host/discovery.sh@94 -- # get_notification_count 00:21:44.842 02:18:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:44.842 02:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.842 02:18:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.842 02:18:59 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:44.842 02:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.842 02:18:59 -- host/discovery.sh@74 -- # notification_count=0 00:21:44.842 02:18:59 -- host/discovery.sh@75 -- # notify_id=0 00:21:44.842 02:18:59 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:44.842 02:18:59 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:44.842 02:18:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.842 02:18:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.842 02:18:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.842 02:18:59 -- host/discovery.sh@100 -- # sleep 1 00:21:45.410 [2024-05-14 02:18:59.807347] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:45.410 [2024-05-14 02:18:59.807378] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:45.410 [2024-05-14 02:18:59.807398] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:45.410 [2024-05-14 02:18:59.893569] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:45.410 [2024-05-14 02:18:59.949787] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:45.410 [2024-05-14 02:18:59.949853] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:45.982 02:19:00 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:45.982 02:19:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.982 02:19:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.982 02:19:00 -- host/discovery.sh@59 -- # sort 00:21:45.982 02:19:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.982 02:19:00 -- host/discovery.sh@59 -- # xargs 00:21:45.982 02:19:00 -- common/autotest_common.sh@10 -- # set +x 00:21:45.982 02:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.982 02:19:00 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.982 02:19:00 -- host/discovery.sh@102 -- # get_bdev_list 00:21:45.982 02:19:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.982 02:19:00 -- host/discovery.sh@55 -- # sort 00:21:45.982 02:19:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.982 02:19:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.982 02:19:00 -- common/autotest_common.sh@10 -- # set +x 00:21:45.982 02:19:00 -- host/discovery.sh@55 -- # xargs 00:21:45.982 02:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.982 02:19:00 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:45.982 02:19:00 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:45.982 02:19:00 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:45.982 02:19:00 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.982 02:19:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.982 02:19:00 -- host/discovery.sh@63 -- # sort -n 00:21:45.982 02:19:00 -- common/autotest_common.sh@10 -- # set +x 00:21:45.982 02:19:00 -- host/discovery.sh@63 -- # xargs 00:21:45.982 02:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.982 02:19:00 -- host/discovery.sh@103 
-- # [[ 4420 == \4\4\2\0 ]] 00:21:45.982 02:19:00 -- host/discovery.sh@104 -- # get_notification_count 00:21:45.982 02:19:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:45.982 02:19:00 -- host/discovery.sh@74 -- # jq '. | length' 00:21:45.982 02:19:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.982 02:19:00 -- common/autotest_common.sh@10 -- # set +x 00:21:45.982 02:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.982 02:19:00 -- host/discovery.sh@74 -- # notification_count=1 00:21:45.982 02:19:00 -- host/discovery.sh@75 -- # notify_id=1 00:21:45.982 02:19:00 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:45.982 02:19:00 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:45.982 02:19:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.982 02:19:00 -- common/autotest_common.sh@10 -- # set +x 00:21:45.982 02:19:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.982 02:19:00 -- host/discovery.sh@109 -- # sleep 1 00:21:47.358 02:19:01 -- host/discovery.sh@110 -- # get_bdev_list 00:21:47.358 02:19:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.358 02:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.358 02:19:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.358 02:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:47.358 02:19:01 -- host/discovery.sh@55 -- # sort 00:21:47.358 02:19:01 -- host/discovery.sh@55 -- # xargs 00:21:47.358 02:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.358 02:19:01 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:47.358 02:19:01 -- host/discovery.sh@111 -- # get_notification_count 00:21:47.358 02:19:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:47.358 02:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.358 02:19:01 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:47.358 02:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:47.358 02:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.358 02:19:01 -- host/discovery.sh@74 -- # notification_count=1 00:21:47.358 02:19:01 -- host/discovery.sh@75 -- # notify_id=2 00:21:47.358 02:19:01 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:47.358 02:19:01 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:47.358 02:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.358 02:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:47.358 [2024-05-14 02:19:01.678784] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:47.358 [2024-05-14 02:19:01.679809] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:47.358 [2024-05-14 02:19:01.679857] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:47.358 02:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.358 02:19:01 -- host/discovery.sh@117 -- # sleep 1 00:21:47.358 [2024-05-14 02:19:01.765956] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:47.358 [2024-05-14 02:19:01.830294] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:47.358 [2024-05-14 02:19:01.830344] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:47.358 [2024-05-14 02:19:01.830366] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:48.294 02:19:02 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:48.294 02:19:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.294 02:19:02 -- host/discovery.sh@59 -- # sort 00:21:48.294 02:19:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.294 02:19:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.294 02:19:02 -- common/autotest_common.sh@10 -- # set +x 00:21:48.294 02:19:02 -- host/discovery.sh@59 -- # xargs 00:21:48.294 02:19:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.294 02:19:02 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.294 02:19:02 -- host/discovery.sh@119 -- # get_bdev_list 00:21:48.294 02:19:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.294 02:19:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.294 02:19:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.294 02:19:02 -- host/discovery.sh@55 -- # sort 00:21:48.294 02:19:02 -- common/autotest_common.sh@10 -- # set +x 00:21:48.294 02:19:02 -- host/discovery.sh@55 -- # xargs 00:21:48.294 02:19:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.294 02:19:02 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.295 02:19:02 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:48.295 02:19:02 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:48.295 02:19:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.295 02:19:02 -- host/discovery.sh@63 -- # sort -n 00:21:48.295 02:19:02 -- common/autotest_common.sh@10 -- # set +x 
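
The stretch above drives the target-side RPCs that make a subsystem visible to the discovery service: create cnode0, add the null bdevs as namespaces, add data listeners on 4420 and then 4421, and allow the test host NQN; after each step the host-side path list and notification count are re-checked. A condensed sketch of that sequence and the two host-side probes (again an illustration of the steps in the trace, not the script itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side (no -s: the default RPC socket, as in the rpc_cmd calls above)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

    # host side: ports the attached controller currently uses ("4420 4421" at this point)
    $rpc -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

    # host side: number of bdev notifications seen so far
    $rpc -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'
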
00:21:48.295 02:19:02 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:48.295 02:19:02 -- host/discovery.sh@63 -- # xargs 00:21:48.295 02:19:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.295 02:19:02 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:48.295 02:19:02 -- host/discovery.sh@121 -- # get_notification_count 00:21:48.295 02:19:02 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:48.295 02:19:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.295 02:19:02 -- common/autotest_common.sh@10 -- # set +x 00:21:48.295 02:19:02 -- host/discovery.sh@74 -- # jq '. | length' 00:21:48.295 02:19:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.554 02:19:02 -- host/discovery.sh@74 -- # notification_count=0 00:21:48.554 02:19:02 -- host/discovery.sh@75 -- # notify_id=2 00:21:48.554 02:19:02 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:48.554 02:19:02 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:48.554 02:19:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.554 02:19:02 -- common/autotest_common.sh@10 -- # set +x 00:21:48.554 [2024-05-14 02:19:02.916624] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:48.554 [2024-05-14 02:19:02.916861] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.554 [2024-05-14 02:19:02.917820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.554 [2024-05-14 02:19:02.917986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.554 [2024-05-14 02:19:02.918006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.554 [2024-05-14 02:19:02.918016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.554 [2024-05-14 02:19:02.918026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.554 [2024-05-14 02:19:02.918035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.554 [2024-05-14 02:19:02.918046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.554 [2024-05-14 02:19:02.918055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.554 [2024-05-14 02:19:02.918065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171cbd0 is same with the state(5) to be set 00:21:48.554 02:19:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.554 02:19:02 -- host/discovery.sh@127 -- # sleep 1 00:21:48.554 [2024-05-14 02:19:02.927775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171cbd0 (9): Bad file descriptor 00:21:48.554 [2024-05-14 02:19:02.937791] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.554 [2024-05-14 
02:19:02.938125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.554 [2024-05-14 02:19:02.938183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.554 [2024-05-14 02:19:02.938201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171cbd0 with addr=10.0.0.2, port=4420 00:21:48.554 [2024-05-14 02:19:02.938212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171cbd0 is same with the state(5) to be set 00:21:48.554 [2024-05-14 02:19:02.938230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171cbd0 (9): Bad file descriptor 00:21:48.554 [2024-05-14 02:19:02.938247] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.554 [2024-05-14 02:19:02.938256] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.554 [2024-05-14 02:19:02.938267] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.554 [2024-05-14 02:19:02.938284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.554 [2024-05-14 02:19:02.948047] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.554 [2024-05-14 02:19:02.948130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.554 [2024-05-14 02:19:02.948206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.554 [2024-05-14 02:19:02.948221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171cbd0 with addr=10.0.0.2, port=4420 00:21:48.554 [2024-05-14 02:19:02.948231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171cbd0 is same with the state(5) to be set 00:21:48.554 [2024-05-14 02:19:02.948246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171cbd0 (9): Bad file descriptor 00:21:48.554 [2024-05-14 02:19:02.948259] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.554 [2024-05-14 02:19:02.948267] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.554 [2024-05-14 02:19:02.948276] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.554 [2024-05-14 02:19:02.948290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.554 [2024-05-14 02:19:02.958099] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.554 [2024-05-14 02:19:02.958179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.554 [2024-05-14 02:19:02.958225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.554 [2024-05-14 02:19:02.958241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171cbd0 with addr=10.0.0.2, port=4420 00:21:48.554 [2024-05-14 02:19:02.958251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171cbd0 is same with the state(5) to be set 00:21:48.554 [2024-05-14 02:19:02.958266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171cbd0 (9): Bad file descriptor 00:21:48.554 [2024-05-14 02:19:02.958280] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.554 [2024-05-14 02:19:02.958289] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.554 [2024-05-14 02:19:02.958299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.554 [2024-05-14 02:19:02.958347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.554 [2024-05-14 02:19:02.968152] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.554 [2024-05-14 02:19:02.968282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.554 [2024-05-14 02:19:02.968346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.554 [2024-05-14 02:19:02.968392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171cbd0 with addr=10.0.0.2, port=4420 00:21:48.554 [2024-05-14 02:19:02.968418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171cbd0 is same with the state(5) to be set 00:21:48.554 [2024-05-14 02:19:02.968464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171cbd0 (9): Bad file descriptor 00:21:48.554 [2024-05-14 02:19:02.968478] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.554 [2024-05-14 02:19:02.968487] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.555 [2024-05-14 02:19:02.968512] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.555 [2024-05-14 02:19:02.968527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.555 [2024-05-14 02:19:02.978237] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.555 [2024-05-14 02:19:02.978359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.555 [2024-05-14 02:19:02.978404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.555 [2024-05-14 02:19:02.978419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171cbd0 with addr=10.0.0.2, port=4420 00:21:48.555 [2024-05-14 02:19:02.978429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171cbd0 is same with the state(5) to be set 00:21:48.555 [2024-05-14 02:19:02.978459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171cbd0 (9): Bad file descriptor 00:21:48.555 [2024-05-14 02:19:02.978488] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.555 [2024-05-14 02:19:02.978496] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.555 [2024-05-14 02:19:02.978505] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.555 [2024-05-14 02:19:02.978548] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.555 [2024-05-14 02:19:02.988315] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.555 [2024-05-14 02:19:02.988394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.555 [2024-05-14 02:19:02.988454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.555 [2024-05-14 02:19:02.988469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171cbd0 with addr=10.0.0.2, port=4420 00:21:48.555 [2024-05-14 02:19:02.988495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171cbd0 is same with the state(5) to be set 00:21:48.555 [2024-05-14 02:19:02.988525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171cbd0 (9): Bad file descriptor 00:21:48.555 [2024-05-14 02:19:02.988553] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.555 [2024-05-14 02:19:02.988561] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.555 [2024-05-14 02:19:02.988569] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.555 [2024-05-14 02:19:02.988583] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
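
The repeated connect() failures in this part of the trace are expected: host/discovery.sh has just removed the 4420 listener, so every reconnect attempt to 10.0.0.2:4420 is refused (errno 111, ECONNREFUSED) until the next discovery log page is processed, at which point the stale path is dropped and only 4421 remains. Roughly, the step that triggers this and the check that follows it:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side: withdraw the first data listener
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # host side: after the log page update, only the 4421 path should be left
    $rpc -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # expected: 4421
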
00:21:48.555 [2024-05-14 02:19:02.998365] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.555 [2024-05-14 02:19:02.998477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.555 [2024-05-14 02:19:02.998521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.555 [2024-05-14 02:19:02.998551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171cbd0 with addr=10.0.0.2, port=4420 00:21:48.555 [2024-05-14 02:19:02.998561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171cbd0 is same with the state(5) to be set 00:21:48.555 [2024-05-14 02:19:02.998576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171cbd0 (9): Bad file descriptor 00:21:48.555 [2024-05-14 02:19:02.998589] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.555 [2024-05-14 02:19:02.998612] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.555 [2024-05-14 02:19:02.998638] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.555 [2024-05-14 02:19:02.998652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.555 [2024-05-14 02:19:03.002826] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:48.555 [2024-05-14 02:19:03.002854] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:49.501 02:19:03 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:49.501 02:19:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.501 02:19:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.501 02:19:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.501 02:19:03 -- common/autotest_common.sh@10 -- # set +x 00:21:49.501 02:19:03 -- host/discovery.sh@59 -- # sort 00:21:49.501 02:19:03 -- host/discovery.sh@59 -- # xargs 00:21:49.501 02:19:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.501 02:19:03 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.501 02:19:03 -- host/discovery.sh@129 -- # get_bdev_list 00:21:49.501 02:19:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.501 02:19:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.501 02:19:03 -- host/discovery.sh@55 -- # sort 00:21:49.501 02:19:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.501 02:19:03 -- host/discovery.sh@55 -- # xargs 00:21:49.501 02:19:03 -- common/autotest_common.sh@10 -- # set +x 00:21:49.501 02:19:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.501 02:19:04 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.501 02:19:04 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:49.501 02:19:04 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.501 02:19:04 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.501 02:19:04 -- host/discovery.sh@63 -- # sort -n 00:21:49.501 02:19:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.501 02:19:04 -- 
common/autotest_common.sh@10 -- # set +x 00:21:49.501 02:19:04 -- host/discovery.sh@63 -- # xargs 00:21:49.501 02:19:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.760 02:19:04 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:49.760 02:19:04 -- host/discovery.sh@131 -- # get_notification_count 00:21:49.760 02:19:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:49.760 02:19:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.760 02:19:04 -- common/autotest_common.sh@10 -- # set +x 00:21:49.760 02:19:04 -- host/discovery.sh@74 -- # jq '. | length' 00:21:49.760 02:19:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.760 02:19:04 -- host/discovery.sh@74 -- # notification_count=0 00:21:49.760 02:19:04 -- host/discovery.sh@75 -- # notify_id=2 00:21:49.760 02:19:04 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:49.760 02:19:04 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:49.760 02:19:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.760 02:19:04 -- common/autotest_common.sh@10 -- # set +x 00:21:49.760 02:19:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.760 02:19:04 -- host/discovery.sh@135 -- # sleep 1 00:21:50.700 02:19:05 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:50.700 02:19:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.700 02:19:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.701 02:19:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.701 02:19:05 -- host/discovery.sh@59 -- # sort 00:21:50.701 02:19:05 -- host/discovery.sh@59 -- # xargs 00:21:50.701 02:19:05 -- common/autotest_common.sh@10 -- # set +x 00:21:50.701 02:19:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.701 02:19:05 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:50.701 02:19:05 -- host/discovery.sh@137 -- # get_bdev_list 00:21:50.701 02:19:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.701 02:19:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.701 02:19:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.701 02:19:05 -- common/autotest_common.sh@10 -- # set +x 00:21:50.701 02:19:05 -- host/discovery.sh@55 -- # sort 00:21:50.701 02:19:05 -- host/discovery.sh@55 -- # xargs 00:21:50.701 02:19:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.701 02:19:05 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:50.701 02:19:05 -- host/discovery.sh@138 -- # get_notification_count 00:21:50.701 02:19:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:50.701 02:19:05 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:50.701 02:19:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.701 02:19:05 -- common/autotest_common.sh@10 -- # set +x 00:21:50.961 02:19:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.961 02:19:05 -- host/discovery.sh@74 -- # notification_count=2 00:21:50.961 02:19:05 -- host/discovery.sh@75 -- # notify_id=4 00:21:50.961 02:19:05 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:50.961 02:19:05 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:50.961 02:19:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.961 02:19:05 -- common/autotest_common.sh@10 -- # set +x 00:21:51.895 [2024-05-14 02:19:06.359241] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:51.895 [2024-05-14 02:19:06.359404] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:51.895 [2024-05-14 02:19:06.359473] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:51.895 [2024-05-14 02:19:06.445484] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:52.154 [2024-05-14 02:19:06.505090] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:52.154 [2024-05-14 02:19:06.505144] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:52.154 02:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.154 02:19:06 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.154 02:19:06 -- common/autotest_common.sh@640 -- # local es=0 00:21:52.154 02:19:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.154 02:19:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:52.154 02:19:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:52.154 02:19:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:52.154 02:19:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:52.154 02:19:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.154 02:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.154 02:19:06 -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 2024/05/14 02:19:06 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:52.154 request: 00:21:52.154 { 00:21:52.154 "method": "bdev_nvme_start_discovery", 00:21:52.154 "params": { 00:21:52.154 "name": "nvme", 00:21:52.154 "trtype": "tcp", 00:21:52.154 "traddr": "10.0.0.2", 00:21:52.154 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:52.154 "adrfam": "ipv4", 00:21:52.154 "trsvcid": "8009", 00:21:52.154 "wait_for_attach": true 00:21:52.154 } 
00:21:52.154 } 00:21:52.154 Got JSON-RPC error response 00:21:52.154 GoRPCClient: error on JSON-RPC call 00:21:52.154 02:19:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:52.154 02:19:06 -- common/autotest_common.sh@643 -- # es=1 00:21:52.154 02:19:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:52.154 02:19:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:52.154 02:19:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:52.154 02:19:06 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:52.154 02:19:06 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:52.154 02:19:06 -- host/discovery.sh@67 -- # sort 00:21:52.154 02:19:06 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:52.154 02:19:06 -- host/discovery.sh@67 -- # xargs 00:21:52.154 02:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.154 02:19:06 -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 02:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.154 02:19:06 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:52.154 02:19:06 -- host/discovery.sh@147 -- # get_bdev_list 00:21:52.154 02:19:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.154 02:19:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.154 02:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.154 02:19:06 -- host/discovery.sh@55 -- # sort 00:21:52.154 02:19:06 -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 02:19:06 -- host/discovery.sh@55 -- # xargs 00:21:52.154 02:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.154 02:19:06 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.154 02:19:06 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.154 02:19:06 -- common/autotest_common.sh@640 -- # local es=0 00:21:52.154 02:19:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.154 02:19:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:52.154 02:19:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:52.154 02:19:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:52.154 02:19:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:52.154 02:19:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.154 02:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.154 02:19:06 -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 2024/05/14 02:19:06 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:52.154 request: 00:21:52.154 { 00:21:52.154 "method": "bdev_nvme_start_discovery", 00:21:52.154 "params": { 00:21:52.154 "name": "nvme_second", 00:21:52.154 "trtype": "tcp", 00:21:52.154 "traddr": "10.0.0.2", 00:21:52.154 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:52.154 "adrfam": "ipv4", 00:21:52.154 
"trsvcid": "8009", 00:21:52.154 "wait_for_attach": true 00:21:52.154 } 00:21:52.154 } 00:21:52.154 Got JSON-RPC error response 00:21:52.154 GoRPCClient: error on JSON-RPC call 00:21:52.154 02:19:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:52.154 02:19:06 -- common/autotest_common.sh@643 -- # es=1 00:21:52.154 02:19:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:52.155 02:19:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:52.155 02:19:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:52.155 02:19:06 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:52.155 02:19:06 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:52.155 02:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.155 02:19:06 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:52.155 02:19:06 -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 02:19:06 -- host/discovery.sh@67 -- # xargs 00:21:52.155 02:19:06 -- host/discovery.sh@67 -- # sort 00:21:52.155 02:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.155 02:19:06 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:52.155 02:19:06 -- host/discovery.sh@153 -- # get_bdev_list 00:21:52.155 02:19:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.155 02:19:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.155 02:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.155 02:19:06 -- host/discovery.sh@55 -- # xargs 00:21:52.155 02:19:06 -- host/discovery.sh@55 -- # sort 00:21:52.155 02:19:06 -- common/autotest_common.sh@10 -- # set +x 00:21:52.413 02:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.413 02:19:06 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.413 02:19:06 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.413 02:19:06 -- common/autotest_common.sh@640 -- # local es=0 00:21:52.413 02:19:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.413 02:19:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:52.413 02:19:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:52.413 02:19:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:52.413 02:19:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:52.413 02:19:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.413 02:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.413 02:19:06 -- common/autotest_common.sh@10 -- # set +x 00:21:53.395 [2024-05-14 02:19:07.774694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.395 [2024-05-14 02:19:07.774801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.395 [2024-05-14 02:19:07.774823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1774410 with addr=10.0.0.2, port=8010 00:21:53.395 [2024-05-14 02:19:07.774842] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:53.395 [2024-05-14 02:19:07.774852] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:53.395 [2024-05-14 02:19:07.774862] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:54.332 [2024-05-14 02:19:08.774667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.332 [2024-05-14 02:19:08.774792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.332 [2024-05-14 02:19:08.774812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1774410 with addr=10.0.0.2, port=8010 00:21:54.332 [2024-05-14 02:19:08.774830] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:54.332 [2024-05-14 02:19:08.774840] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:54.332 [2024-05-14 02:19:08.774850] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:55.266 [2024-05-14 02:19:09.774543] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:55.266 request: 00:21:55.266 { 00:21:55.266 "method": "bdev_nvme_start_discovery", 00:21:55.266 "params": { 00:21:55.266 "name": "nvme_second", 00:21:55.266 "trtype": "tcp", 00:21:55.266 "traddr": "10.0.0.2", 00:21:55.266 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:55.266 "adrfam": "ipv4", 00:21:55.266 "trsvcid": "8010", 00:21:55.266 "attach_timeout_ms": 3000 00:21:55.266 } 00:21:55.266 } 00:21:55.266 Got JSON-RPC error response 00:21:55.266 GoRPCClient: error on JSON-RPC call 00:21:55.266 2024/05/14 02:19:09 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:55.266 02:19:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:55.266 02:19:09 -- common/autotest_common.sh@643 -- # es=1 00:21:55.266 02:19:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:55.266 02:19:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:55.266 02:19:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:55.266 02:19:09 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:55.266 02:19:09 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:55.266 02:19:09 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:55.266 02:19:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.266 02:19:09 -- common/autotest_common.sh@10 -- # set +x 00:21:55.266 02:19:09 -- host/discovery.sh@67 -- # sort 00:21:55.266 02:19:09 -- host/discovery.sh@67 -- # xargs 00:21:55.266 02:19:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.266 02:19:09 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:55.266 02:19:09 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:55.266 02:19:09 -- host/discovery.sh@162 -- # kill 83419 00:21:55.266 02:19:09 -- host/discovery.sh@163 -- # nvmftestfini 00:21:55.266 02:19:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:55.266 02:19:09 -- nvmf/common.sh@116 -- # sync 00:21:55.525 02:19:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:55.525 02:19:09 -- nvmf/common.sh@119 -- # set +e 00:21:55.525 02:19:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:55.525 02:19:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
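The two Code=-17 (File exists) failures and the final Code=-110 (Connection timed out) failure above are the expected outcomes of host/discovery.sh steps 142-156: re-issuing bdev_nvme_start_discovery against a discovery endpoint that is already attached is rejected, and pointing a second discovery controller at 10.0.0.2:8010 (where nothing listens) with a 3000 ms attach timeout gives up after the connect() retries logged above. A condensed, hypothetical replay of the same RPCs, assuming scripts/rpc.py from the SPDK tree stands in for the rpc_cmd helper and the host app is still serving JSON-RPC on /tmp/host.sock:

    rpc="scripts/rpc.py -s /tmp/host.sock"   # assumed stand-in for rpc_cmd

    # Step 142 equivalent: -w (wait_for_attach) blocks until the ctrlr attaches.
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Same or a new name against the already-attached endpoint -> Code=-17 "File exists".
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "expected: File exists"
    $rpc bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "expected: File exists"

    # No listener on 8010; -T 3000 maps to attach_timeout_ms -> Code=-110 timeout.
    $rpc bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected: timeout"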
00:21:55.525 rmmod nvme_tcp 00:21:55.525 rmmod nvme_fabrics 00:21:55.525 rmmod nvme_keyring 00:21:55.525 02:19:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:55.525 02:19:09 -- nvmf/common.sh@123 -- # set -e 00:21:55.525 02:19:09 -- nvmf/common.sh@124 -- # return 0 00:21:55.525 02:19:09 -- nvmf/common.sh@477 -- # '[' -n 83363 ']' 00:21:55.525 02:19:09 -- nvmf/common.sh@478 -- # killprocess 83363 00:21:55.525 02:19:09 -- common/autotest_common.sh@926 -- # '[' -z 83363 ']' 00:21:55.525 02:19:09 -- common/autotest_common.sh@930 -- # kill -0 83363 00:21:55.525 02:19:09 -- common/autotest_common.sh@931 -- # uname 00:21:55.525 02:19:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:55.525 02:19:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83363 00:21:55.525 02:19:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:55.525 killing process with pid 83363 00:21:55.525 02:19:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:55.525 02:19:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83363' 00:21:55.525 02:19:09 -- common/autotest_common.sh@945 -- # kill 83363 00:21:55.525 02:19:09 -- common/autotest_common.sh@950 -- # wait 83363 00:21:55.784 02:19:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:55.784 02:19:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:55.784 02:19:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:55.784 02:19:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.784 02:19:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:55.784 02:19:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.784 02:19:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.784 02:19:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.784 02:19:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:55.784 00:21:55.784 real 0m14.176s 00:21:55.784 user 0m27.833s 00:21:55.784 sys 0m1.699s 00:21:55.784 02:19:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.784 02:19:10 -- common/autotest_common.sh@10 -- # set +x 00:21:55.784 ************************************ 00:21:55.784 END TEST nvmf_discovery 00:21:55.784 ************************************ 00:21:55.784 02:19:10 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:55.784 02:19:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:55.784 02:19:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:55.784 02:19:10 -- common/autotest_common.sh@10 -- # set +x 00:21:55.784 ************************************ 00:21:55.784 START TEST nvmf_discovery_remove_ifc 00:21:55.784 ************************************ 00:21:55.784 02:19:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:55.784 * Looking for test storage... 
00:21:55.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:55.784 02:19:10 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:55.784 02:19:10 -- nvmf/common.sh@7 -- # uname -s 00:21:55.784 02:19:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.784 02:19:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.784 02:19:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.784 02:19:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.784 02:19:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.784 02:19:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.784 02:19:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.784 02:19:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.784 02:19:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.784 02:19:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.784 02:19:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:21:55.784 02:19:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:21:55.784 02:19:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.784 02:19:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.784 02:19:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:55.784 02:19:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:56.044 02:19:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.044 02:19:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.044 02:19:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.044 02:19:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.044 02:19:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.044 02:19:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.044 02:19:10 -- 
paths/export.sh@5 -- # export PATH 00:21:56.044 02:19:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.044 02:19:10 -- nvmf/common.sh@46 -- # : 0 00:21:56.044 02:19:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:56.044 02:19:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:56.044 02:19:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:56.044 02:19:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.044 02:19:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.044 02:19:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:56.044 02:19:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:56.044 02:19:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:56.044 02:19:10 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:56.044 02:19:10 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:56.044 02:19:10 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:56.044 02:19:10 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:56.044 02:19:10 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:56.044 02:19:10 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:56.044 02:19:10 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:56.044 02:19:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:56.044 02:19:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.044 02:19:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:56.044 02:19:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:56.044 02:19:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:56.044 02:19:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.044 02:19:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.044 02:19:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.044 02:19:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:56.044 02:19:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:56.044 02:19:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:56.044 02:19:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:56.044 02:19:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:56.044 02:19:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:56.044 02:19:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.044 02:19:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.044 02:19:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:56.044 02:19:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:56.044 02:19:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:56.044 02:19:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:56.044 02:19:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:56.044 02:19:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:21:56.044 02:19:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:56.044 02:19:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:56.044 02:19:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:56.044 02:19:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:56.044 02:19:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:56.044 02:19:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:56.044 Cannot find device "nvmf_tgt_br" 00:21:56.044 02:19:10 -- nvmf/common.sh@154 -- # true 00:21:56.044 02:19:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:56.044 Cannot find device "nvmf_tgt_br2" 00:21:56.044 02:19:10 -- nvmf/common.sh@155 -- # true 00:21:56.044 02:19:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:56.044 02:19:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:56.044 Cannot find device "nvmf_tgt_br" 00:21:56.044 02:19:10 -- nvmf/common.sh@157 -- # true 00:21:56.044 02:19:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:56.044 Cannot find device "nvmf_tgt_br2" 00:21:56.044 02:19:10 -- nvmf/common.sh@158 -- # true 00:21:56.044 02:19:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:56.044 02:19:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:56.044 02:19:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:56.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.044 02:19:10 -- nvmf/common.sh@161 -- # true 00:21:56.044 02:19:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:56.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.044 02:19:10 -- nvmf/common.sh@162 -- # true 00:21:56.044 02:19:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:56.044 02:19:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:56.044 02:19:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:56.044 02:19:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:56.044 02:19:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:56.044 02:19:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:56.044 02:19:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:56.045 02:19:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:56.045 02:19:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:56.045 02:19:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:56.045 02:19:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:56.045 02:19:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:56.045 02:19:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:56.304 02:19:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:56.304 02:19:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:56.304 02:19:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:56.304 02:19:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:56.304 02:19:10 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:21:56.304 02:19:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:56.304 02:19:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:56.304 02:19:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:56.304 02:19:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:56.304 02:19:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:56.304 02:19:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:56.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:21:56.304 00:21:56.304 --- 10.0.0.2 ping statistics --- 00:21:56.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.304 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:21:56.304 02:19:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:56.304 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:56.304 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:56.304 00:21:56.304 --- 10.0.0.3 ping statistics --- 00:21:56.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.304 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:56.304 02:19:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:56.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:56.304 00:21:56.304 --- 10.0.0.1 ping statistics --- 00:21:56.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.304 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:56.304 02:19:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.304 02:19:10 -- nvmf/common.sh@421 -- # return 0 00:21:56.304 02:19:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:56.304 02:19:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.304 02:19:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:56.304 02:19:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:56.304 02:19:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.304 02:19:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:56.304 02:19:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:56.304 02:19:10 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:56.304 02:19:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:56.304 02:19:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:56.304 02:19:10 -- common/autotest_common.sh@10 -- # set +x 00:21:56.304 02:19:10 -- nvmf/common.sh@469 -- # nvmfpid=83918 00:21:56.304 02:19:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:56.304 02:19:10 -- nvmf/common.sh@470 -- # waitforlisten 83918 00:21:56.304 02:19:10 -- common/autotest_common.sh@819 -- # '[' -z 83918 ']' 00:21:56.304 02:19:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.304 02:19:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:56.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.304 02:19:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
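The nvmf_veth_init block above builds the virtual topology this host test (and the digest test later) runs on: the target lives inside the nvmf_tgt_ns_spdk namespace and owns 10.0.0.2 and 10.0.0.3 on two veth legs, the initiator side keeps 10.0.0.1 on nvmf_init_if, and the peer ends are joined by the nvmf_br bridge. A condensed sketch of the same commands, with the initial teardown and its "Cannot find device" noise omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the peer ends together and allow NVMe/TCP traffic through.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Reachability checks, as in the ping output above.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1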
00:21:56.304 02:19:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:56.304 02:19:10 -- common/autotest_common.sh@10 -- # set +x 00:21:56.304 [2024-05-14 02:19:10.801508] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:56.304 [2024-05-14 02:19:10.801581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.563 [2024-05-14 02:19:10.940919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.563 [2024-05-14 02:19:11.016492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:56.563 [2024-05-14 02:19:11.016672] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.563 [2024-05-14 02:19:11.016687] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.563 [2024-05-14 02:19:11.016699] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.563 [2024-05-14 02:19:11.016746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.501 02:19:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:57.501 02:19:11 -- common/autotest_common.sh@852 -- # return 0 00:21:57.501 02:19:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:57.501 02:19:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:57.501 02:19:11 -- common/autotest_common.sh@10 -- # set +x 00:21:57.501 02:19:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.501 02:19:11 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:57.501 02:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.501 02:19:11 -- common/autotest_common.sh@10 -- # set +x 00:21:57.501 [2024-05-14 02:19:11.879973] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.501 [2024-05-14 02:19:11.888085] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:57.501 null0 00:21:57.501 [2024-05-14 02:19:11.920137] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.501 02:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.501 02:19:11 -- host/discovery_remove_ifc.sh@59 -- # hostpid=83970 00:21:57.501 02:19:11 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:57.501 02:19:11 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83970 /tmp/host.sock 00:21:57.501 02:19:11 -- common/autotest_common.sh@819 -- # '[' -z 83970 ']' 00:21:57.501 02:19:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:57.501 02:19:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:57.501 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:57.501 02:19:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:57.501 02:19:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:57.501 02:19:11 -- common/autotest_common.sh@10 -- # set +x 00:21:57.501 [2024-05-14 02:19:12.000735] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:57.501 [2024-05-14 02:19:12.000852] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83970 ] 00:21:57.760 [2024-05-14 02:19:12.137484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.760 [2024-05-14 02:19:12.190591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:57.760 [2024-05-14 02:19:12.191037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.695 02:19:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:58.695 02:19:12 -- common/autotest_common.sh@852 -- # return 0 00:21:58.695 02:19:12 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.695 02:19:12 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:58.695 02:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.695 02:19:12 -- common/autotest_common.sh@10 -- # set +x 00:21:58.695 02:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.695 02:19:12 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:58.695 02:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.695 02:19:12 -- common/autotest_common.sh@10 -- # set +x 00:21:58.695 02:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.695 02:19:12 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:58.695 02:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.695 02:19:12 -- common/autotest_common.sh@10 -- # set +x 00:21:59.629 [2024-05-14 02:19:13.992895] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:59.629 [2024-05-14 02:19:13.992925] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:59.629 [2024-05-14 02:19:13.992959] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:59.629 [2024-05-14 02:19:14.079043] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:59.629 [2024-05-14 02:19:14.136174] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:59.629 [2024-05-14 02:19:14.136241] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:59.629 [2024-05-14 02:19:14.136265] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:59.629 [2024-05-14 02:19:14.136280] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:59.629 [2024-05-14 02:19:14.136304] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:59.629 02:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:59.629 02:19:14 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.629 02:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.629 02:19:14 -- common/autotest_common.sh@10 -- # set +x 00:21:59.629 [2024-05-14 02:19:14.141768] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15b13e0 was disconnected and freed. delete nvme_qpair. 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:59.629 02:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.629 02:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.629 02:19:14 -- common/autotest_common.sh@10 -- # set +x 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:59.629 02:19:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:59.888 02:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.888 02:19:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:59.888 02:19:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:00.824 02:19:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:00.824 02:19:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:00.824 02:19:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:00.824 02:19:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:00.824 02:19:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:00.824 02:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.824 02:19:15 -- common/autotest_common.sh@10 -- # set +x 00:22:00.824 02:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.824 02:19:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:00.824 02:19:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:01.761 02:19:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:01.761 02:19:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.761 02:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.761 02:19:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:01.761 02:19:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:01.761 02:19:16 -- common/autotest_common.sh@10 -- # set +x 00:22:01.761 02:19:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:02.020 02:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.020 02:19:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:02.020 02:19:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:02.957 02:19:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:02.957 02:19:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:02.957 02:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.957 02:19:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:02.957 02:19:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:02.957 02:19:17 -- common/autotest_common.sh@10 -- # set +x 00:22:02.957 02:19:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:02.957 02:19:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.957 02:19:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:02.958 02:19:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:03.895 02:19:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.895 02:19:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.895 02:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.895 02:19:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.895 02:19:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.895 02:19:18 -- common/autotest_common.sh@10 -- # set +x 00:22:03.895 02:19:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.153 02:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.153 02:19:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:04.153 02:19:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:05.106 02:19:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.106 02:19:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.106 02:19:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.106 02:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.106 02:19:19 -- common/autotest_common.sh@10 -- # set +x 00:22:05.106 02:19:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.106 02:19:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.106 02:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.106 [2024-05-14 02:19:19.564188] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:05.106 [2024-05-14 02:19:19.564501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.106 [2024-05-14 02:19:19.564654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.106 [2024-05-14 02:19:19.564828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.106 [2024-05-14 02:19:19.564947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.106 [2024-05-14 02:19:19.564964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.106 [2024-05-14 02:19:19.564975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.106 [2024-05-14 02:19:19.564985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.106 [2024-05-14 02:19:19.564994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.106 [2024-05-14 
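The polling above is the core of discovery_remove_ifc: the host app on /tmp/host.sock attached its discovery controller with deliberately short failure timeouts (step 69 above), the test then stripped 10.0.0.2 from the target interface and downed it (steps 75-76), and the loop re-reads bdev_get_bdevs once a second until the failed reconnects that follow cause nvme0n1 to be dropped; steps 82-86 later restore the interface and wait for the re-discovered namespace to reappear as nvme1n1. A condensed sketch of that sequence, again assuming scripts/rpc.py stands in for the rpc_cmd helper:

    # Host side: discovery with short loss/reconnect timeouts (step 69 above).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # Pull the target address out from under the live connection.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # Poll until the reconnect failures remove nvme0n1 from the bdev list.
    while scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | grep -q '^nvme0n1$'; do
        sleep 1
    done

    # Restore the interface; discovery reconnects and the namespace returns as nvme1n1.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up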
02:19:19.565005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.106 [2024-05-14 02:19:19.565014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.107 [2024-05-14 02:19:19.565023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157ac40 is same with the state(5) to be set 00:22:05.107 [2024-05-14 02:19:19.574169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157ac40 (9): Bad file descriptor 00:22:05.107 [2024-05-14 02:19:19.584188] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.107 02:19:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.107 02:19:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:06.051 02:19:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:06.051 02:19:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.051 02:19:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:06.051 02:19:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:06.051 02:19:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.051 02:19:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:06.051 02:19:20 -- common/autotest_common.sh@10 -- # set +x 00:22:06.310 [2024-05-14 02:19:20.647892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:07.247 [2024-05-14 02:19:21.671908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:07.247 [2024-05-14 02:19:21.672074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x157ac40 with addr=10.0.0.2, port=4420 00:22:07.247 [2024-05-14 02:19:21.672123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157ac40 is same with the state(5) to be set 00:22:07.247 [2024-05-14 02:19:21.673086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157ac40 (9): Bad file descriptor 00:22:07.247 [2024-05-14 02:19:21.673156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:07.247 [2024-05-14 02:19:21.673208] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:07.247 [2024-05-14 02:19:21.673282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.247 [2024-05-14 02:19:21.673323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.247 [2024-05-14 02:19:21.673366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.247 [2024-05-14 02:19:21.673405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.247 [2024-05-14 02:19:21.673433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.247 [2024-05-14 02:19:21.673455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.247 [2024-05-14 02:19:21.673486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.247 [2024-05-14 02:19:21.673506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.247 [2024-05-14 02:19:21.673529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.247 [2024-05-14 02:19:21.673549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.247 [2024-05-14 02:19:21.673570] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:07.247 [2024-05-14 02:19:21.673648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15221c0 (9): Bad file descriptor 00:22:07.247 [2024-05-14 02:19:21.674644] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:07.247 [2024-05-14 02:19:21.674713] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:07.247 02:19:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.247 02:19:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:07.247 02:19:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.182 02:19:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.182 02:19:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.182 02:19:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.182 02:19:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.182 02:19:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.182 02:19:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.182 02:19:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.182 02:19:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.182 02:19:22 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:08.182 02:19:22 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:08.440 02:19:22 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:08.440 02:19:22 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:08.440 02:19:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.440 02:19:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.440 02:19:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.440 02:19:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.440 02:19:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.440 02:19:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.440 02:19:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.440 02:19:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.440 02:19:22 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:08.440 02:19:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:09.378 [2024-05-14 02:19:23.685862] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:09.378 [2024-05-14 02:19:23.685944] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:09.378 [2024-05-14 02:19:23.685965] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:09.379 [2024-05-14 02:19:23.772083] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:09.379 [2024-05-14 02:19:23.827493] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:09.379 [2024-05-14 02:19:23.827544] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:09.379 [2024-05-14 02:19:23.827567] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:09.379 [2024-05-14 02:19:23.827582] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:09.379 [2024-05-14 02:19:23.827591] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:09.379 [2024-05-14 02:19:23.834491] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x156b3d0 was disconnected and freed. delete nvme_qpair. 00:22:09.379 02:19:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.379 02:19:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.379 02:19:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.379 02:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.379 02:19:23 -- common/autotest_common.sh@10 -- # set +x 00:22:09.379 02:19:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.379 02:19:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.379 02:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.379 02:19:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:09.379 02:19:23 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:09.379 02:19:23 -- host/discovery_remove_ifc.sh@90 -- # killprocess 83970 00:22:09.379 02:19:23 -- common/autotest_common.sh@926 -- # '[' -z 83970 ']' 00:22:09.379 02:19:23 -- common/autotest_common.sh@930 -- # kill -0 83970 00:22:09.379 02:19:23 -- common/autotest_common.sh@931 -- # uname 00:22:09.379 02:19:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:09.379 02:19:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83970 00:22:09.379 killing process with pid 83970 00:22:09.379 02:19:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:09.379 02:19:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:09.379 02:19:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83970' 00:22:09.379 02:19:23 -- common/autotest_common.sh@945 -- # kill 83970 00:22:09.379 02:19:23 -- common/autotest_common.sh@950 -- # wait 83970 00:22:09.638 02:19:24 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:09.638 02:19:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:09.638 02:19:24 -- nvmf/common.sh@116 -- # sync 00:22:09.638 02:19:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:09.638 02:19:24 -- nvmf/common.sh@119 -- # set +e 00:22:09.638 02:19:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:09.638 02:19:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:09.638 rmmod nvme_tcp 00:22:09.638 rmmod nvme_fabrics 00:22:09.638 rmmod nvme_keyring 00:22:09.897 02:19:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:09.897 02:19:24 -- nvmf/common.sh@123 -- # set -e 00:22:09.897 02:19:24 -- nvmf/common.sh@124 -- # return 0 00:22:09.897 02:19:24 -- nvmf/common.sh@477 -- # '[' -n 83918 ']' 00:22:09.897 02:19:24 -- nvmf/common.sh@478 -- # killprocess 83918 00:22:09.897 02:19:24 -- common/autotest_common.sh@926 -- # '[' -z 83918 ']' 00:22:09.897 02:19:24 -- common/autotest_common.sh@930 -- # kill -0 83918 00:22:09.897 02:19:24 -- common/autotest_common.sh@931 -- # uname 00:22:09.897 02:19:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:09.897 02:19:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83918 00:22:09.897 killing process with pid 83918 00:22:09.897 02:19:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:09.897 02:19:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:22:09.897 02:19:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83918' 00:22:09.897 02:19:24 -- common/autotest_common.sh@945 -- # kill 83918 00:22:09.897 02:19:24 -- common/autotest_common.sh@950 -- # wait 83918 00:22:10.156 02:19:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:10.156 02:19:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:10.156 02:19:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:10.156 02:19:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:10.156 02:19:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:10.156 02:19:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.156 02:19:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.156 02:19:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.156 02:19:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:10.156 00:22:10.156 real 0m14.267s 00:22:10.156 user 0m24.507s 00:22:10.156 sys 0m1.515s 00:22:10.156 02:19:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:10.156 02:19:24 -- common/autotest_common.sh@10 -- # set +x 00:22:10.156 ************************************ 00:22:10.156 END TEST nvmf_discovery_remove_ifc 00:22:10.156 ************************************ 00:22:10.156 02:19:24 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:22:10.156 02:19:24 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:10.156 02:19:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:10.156 02:19:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:10.156 02:19:24 -- common/autotest_common.sh@10 -- # set +x 00:22:10.156 ************************************ 00:22:10.156 START TEST nvmf_digest 00:22:10.156 ************************************ 00:22:10.156 02:19:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:10.156 * Looking for test storage... 
00:22:10.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:10.156 02:19:24 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:10.156 02:19:24 -- nvmf/common.sh@7 -- # uname -s 00:22:10.156 02:19:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.156 02:19:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.156 02:19:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.156 02:19:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.156 02:19:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.156 02:19:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.156 02:19:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.156 02:19:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.156 02:19:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.156 02:19:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.156 02:19:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:22:10.156 02:19:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:22:10.156 02:19:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.156 02:19:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.156 02:19:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:10.157 02:19:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:10.157 02:19:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.157 02:19:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.157 02:19:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.157 02:19:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.157 02:19:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.157 02:19:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.157 02:19:24 -- paths/export.sh@5 
-- # export PATH 00:22:10.157 02:19:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.157 02:19:24 -- nvmf/common.sh@46 -- # : 0 00:22:10.157 02:19:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:10.157 02:19:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:10.157 02:19:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:10.157 02:19:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.157 02:19:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.157 02:19:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:10.157 02:19:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:10.157 02:19:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:10.157 02:19:24 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:10.157 02:19:24 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:10.157 02:19:24 -- host/digest.sh@16 -- # runtime=2 00:22:10.157 02:19:24 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:10.157 02:19:24 -- host/digest.sh@132 -- # nvmftestinit 00:22:10.157 02:19:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:10.157 02:19:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.157 02:19:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:10.157 02:19:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:10.157 02:19:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:10.157 02:19:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.157 02:19:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.157 02:19:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.157 02:19:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:10.157 02:19:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:10.157 02:19:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:10.157 02:19:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:10.157 02:19:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:10.157 02:19:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:10.157 02:19:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.157 02:19:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.157 02:19:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:10.157 02:19:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:10.157 02:19:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:10.157 02:19:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:10.157 02:19:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:10.157 02:19:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.157 02:19:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:10.157 02:19:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:10.157 02:19:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:10.157 02:19:24 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:10.157 02:19:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:10.157 02:19:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:10.157 Cannot find device "nvmf_tgt_br" 00:22:10.157 02:19:24 -- nvmf/common.sh@154 -- # true 00:22:10.157 02:19:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:10.415 Cannot find device "nvmf_tgt_br2" 00:22:10.415 02:19:24 -- nvmf/common.sh@155 -- # true 00:22:10.415 02:19:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:10.415 02:19:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:10.415 Cannot find device "nvmf_tgt_br" 00:22:10.415 02:19:24 -- nvmf/common.sh@157 -- # true 00:22:10.415 02:19:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:10.415 Cannot find device "nvmf_tgt_br2" 00:22:10.415 02:19:24 -- nvmf/common.sh@158 -- # true 00:22:10.415 02:19:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:10.415 02:19:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:10.415 02:19:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:10.415 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:10.415 02:19:24 -- nvmf/common.sh@161 -- # true 00:22:10.415 02:19:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:10.415 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:10.415 02:19:24 -- nvmf/common.sh@162 -- # true 00:22:10.415 02:19:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:10.415 02:19:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:10.415 02:19:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:10.415 02:19:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:10.415 02:19:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:10.415 02:19:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:10.415 02:19:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:10.415 02:19:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:10.415 02:19:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:10.415 02:19:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:10.415 02:19:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:10.415 02:19:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:10.415 02:19:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:10.415 02:19:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:10.415 02:19:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:10.415 02:19:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:10.415 02:19:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:10.415 02:19:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:10.415 02:19:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:10.415 02:19:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:10.415 02:19:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:10.415 
02:19:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:10.415 02:19:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:10.673 02:19:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:10.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:22:10.673 00:22:10.673 --- 10.0.0.2 ping statistics --- 00:22:10.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.673 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:10.673 02:19:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:10.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:10.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:22:10.673 00:22:10.673 --- 10.0.0.3 ping statistics --- 00:22:10.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.673 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:10.673 02:19:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:10.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:22:10.673 00:22:10.673 --- 10.0.0.1 ping statistics --- 00:22:10.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.673 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:22:10.673 02:19:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.673 02:19:25 -- nvmf/common.sh@421 -- # return 0 00:22:10.673 02:19:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:10.673 02:19:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.673 02:19:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:10.673 02:19:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:10.673 02:19:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.673 02:19:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:10.674 02:19:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:10.674 02:19:25 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:10.674 02:19:25 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:10.674 02:19:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:10.674 02:19:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:10.674 02:19:25 -- common/autotest_common.sh@10 -- # set +x 00:22:10.674 ************************************ 00:22:10.674 START TEST nvmf_digest_clean 00:22:10.674 ************************************ 00:22:10.674 02:19:25 -- common/autotest_common.sh@1104 -- # run_digest 00:22:10.674 02:19:25 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:10.674 02:19:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:10.674 02:19:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:10.674 02:19:25 -- common/autotest_common.sh@10 -- # set +x 00:22:10.674 02:19:25 -- nvmf/common.sh@469 -- # nvmfpid=84378 00:22:10.674 02:19:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:10.674 02:19:25 -- nvmf/common.sh@470 -- # waitforlisten 84378 00:22:10.674 02:19:25 -- common/autotest_common.sh@819 -- # '[' -z 84378 ']' 00:22:10.674 02:19:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
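For reference, the nvmf_veth_init sequence traced above builds the test network that the rest of this log relies on: two veth pairs bridged on the host side, with the target ends moved into the nvmf_tgt_ns_spdk namespace where nvmf_tgt is launched via ip netns exec. A minimal sketch of the equivalent commands, condensed from the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is configured the same way and omitted here, as are the matching 'ip link set ... up' calls; this is a simplified recap, not the full nvmf/common.sh logic):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br                 # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                   # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                            # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMe/TCP listen address
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                                   # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP to the target port
ping -c 1 10.0.0.2                                                        # connectivity check, as in the trace above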
00:22:10.674 02:19:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:10.674 02:19:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.674 02:19:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:10.674 02:19:25 -- common/autotest_common.sh@10 -- # set +x 00:22:10.674 [2024-05-14 02:19:25.112610] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:10.674 [2024-05-14 02:19:25.112685] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.674 [2024-05-14 02:19:25.254866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.931 [2024-05-14 02:19:25.330818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:10.931 [2024-05-14 02:19:25.331023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.931 [2024-05-14 02:19:25.331038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.931 [2024-05-14 02:19:25.331049] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:10.931 [2024-05-14 02:19:25.331082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.867 02:19:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:11.867 02:19:26 -- common/autotest_common.sh@852 -- # return 0 00:22:11.867 02:19:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:11.867 02:19:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:11.867 02:19:26 -- common/autotest_common.sh@10 -- # set +x 00:22:11.867 02:19:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.867 02:19:26 -- host/digest.sh@120 -- # common_target_config 00:22:11.867 02:19:26 -- host/digest.sh@43 -- # rpc_cmd 00:22:11.867 02:19:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.867 02:19:26 -- common/autotest_common.sh@10 -- # set +x 00:22:11.867 null0 00:22:11.867 [2024-05-14 02:19:26.212762] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.867 [2024-05-14 02:19:26.236914] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.867 02:19:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.867 02:19:26 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:11.867 02:19:26 -- host/digest.sh@77 -- # local rw bs qd 00:22:11.867 02:19:26 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:11.867 02:19:26 -- host/digest.sh@80 -- # rw=randread 00:22:11.867 02:19:26 -- host/digest.sh@80 -- # bs=4096 00:22:11.867 02:19:26 -- host/digest.sh@80 -- # qd=128 00:22:11.867 02:19:26 -- host/digest.sh@82 -- # bperfpid=84434 00:22:11.867 02:19:26 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:11.867 02:19:26 -- host/digest.sh@83 -- # waitforlisten 84434 /var/tmp/bperf.sock 00:22:11.867 02:19:26 -- common/autotest_common.sh@819 -- # '[' -z 84434 ']' 00:22:11.867 02:19:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:11.867 02:19:26 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:22:11.867 02:19:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:11.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:11.867 02:19:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:11.867 02:19:26 -- common/autotest_common.sh@10 -- # set +x 00:22:11.867 [2024-05-14 02:19:26.296768] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:11.867 [2024-05-14 02:19:26.296885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84434 ] 00:22:11.867 [2024-05-14 02:19:26.434631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.126 [2024-05-14 02:19:26.505870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.126 02:19:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:12.126 02:19:26 -- common/autotest_common.sh@852 -- # return 0 00:22:12.126 02:19:26 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:12.126 02:19:26 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:12.126 02:19:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:12.385 02:19:26 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:12.385 02:19:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:12.644 nvme0n1 00:22:12.644 02:19:27 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:12.644 02:19:27 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:12.903 Running I/O for 2 seconds... 
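The run_bperf flow traced here is the same pattern each of the four clean-digest workloads in this test repeats (randread and randwrite, 4096/qd128 and 131072/qd16), so it is worth spelling out once. A sketch condensed from the commands visible in this trace (repository paths shortened; the accel_get_stats check appears just after the run completes):

# start bdevperf paused on its own RPC socket (-z keeps it alive, --wait-for-rpc defers init)
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# finish framework init, then attach the target with data digest enabled (--ddgst)
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# drive the 2-second workload, then check which accel module executed the crc32c digests
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'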
00:22:14.802 00:22:14.802 Latency(us) 00:22:14.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.802 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:14.802 nvme0n1 : 2.00 17054.24 66.62 0.00 0.00 7497.99 3410.85 15728.64 00:22:14.802 =================================================================================================================== 00:22:14.802 Total : 17054.24 66.62 0.00 0.00 7497.99 3410.85 15728.64 00:22:14.802 0 00:22:14.802 02:19:29 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:14.802 02:19:29 -- host/digest.sh@92 -- # get_accel_stats 00:22:14.802 02:19:29 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:14.802 02:19:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:14.802 02:19:29 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:14.802 | select(.opcode=="crc32c") 00:22:14.802 | "\(.module_name) \(.executed)"' 00:22:15.060 02:19:29 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:15.060 02:19:29 -- host/digest.sh@93 -- # exp_module=software 00:22:15.060 02:19:29 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:15.060 02:19:29 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:15.060 02:19:29 -- host/digest.sh@97 -- # killprocess 84434 00:22:15.060 02:19:29 -- common/autotest_common.sh@926 -- # '[' -z 84434 ']' 00:22:15.060 02:19:29 -- common/autotest_common.sh@930 -- # kill -0 84434 00:22:15.060 02:19:29 -- common/autotest_common.sh@931 -- # uname 00:22:15.060 02:19:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:15.060 02:19:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84434 00:22:15.317 02:19:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:15.317 killing process with pid 84434 00:22:15.317 02:19:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:15.317 02:19:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84434' 00:22:15.317 Received shutdown signal, test time was about 2.000000 seconds 00:22:15.317 00:22:15.317 Latency(us) 00:22:15.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.318 =================================================================================================================== 00:22:15.318 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.318 02:19:29 -- common/autotest_common.sh@945 -- # kill 84434 00:22:15.318 02:19:29 -- common/autotest_common.sh@950 -- # wait 84434 00:22:15.318 02:19:29 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:15.318 02:19:29 -- host/digest.sh@77 -- # local rw bs qd 00:22:15.318 02:19:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:15.318 02:19:29 -- host/digest.sh@80 -- # rw=randread 00:22:15.318 02:19:29 -- host/digest.sh@80 -- # bs=131072 00:22:15.318 02:19:29 -- host/digest.sh@80 -- # qd=16 00:22:15.318 02:19:29 -- host/digest.sh@82 -- # bperfpid=84505 00:22:15.318 02:19:29 -- host/digest.sh@83 -- # waitforlisten 84505 /var/tmp/bperf.sock 00:22:15.318 02:19:29 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:15.318 02:19:29 -- common/autotest_common.sh@819 -- # '[' -z 84505 ']' 00:22:15.318 02:19:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:15.318 02:19:29 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:15.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:15.318 02:19:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:15.318 02:19:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:15.318 02:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:15.575 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:15.575 Zero copy mechanism will not be used. 00:22:15.575 [2024-05-14 02:19:29.933634] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:15.575 [2024-05-14 02:19:29.933747] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84505 ] 00:22:15.575 [2024-05-14 02:19:30.076841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.575 [2024-05-14 02:19:30.138676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.510 02:19:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:16.510 02:19:30 -- common/autotest_common.sh@852 -- # return 0 00:22:16.510 02:19:30 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:16.510 02:19:30 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:16.510 02:19:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:16.768 02:19:31 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:16.768 02:19:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:17.027 nvme0n1 00:22:17.027 02:19:31 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:17.027 02:19:31 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:17.285 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:17.285 Zero copy mechanism will not be used. 00:22:17.285 Running I/O for 2 seconds... 
00:22:19.217 00:22:19.217 Latency(us) 00:22:19.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.217 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:19.217 nvme0n1 : 2.00 7512.89 939.11 0.00 0.00 2125.71 912.29 4021.53 00:22:19.217 =================================================================================================================== 00:22:19.217 Total : 7512.89 939.11 0.00 0.00 2125.71 912.29 4021.53 00:22:19.217 0 00:22:19.217 02:19:33 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:19.217 02:19:33 -- host/digest.sh@92 -- # get_accel_stats 00:22:19.217 02:19:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:19.217 02:19:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:19.217 | select(.opcode=="crc32c") 00:22:19.217 | "\(.module_name) \(.executed)"' 00:22:19.217 02:19:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:19.476 02:19:33 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:19.476 02:19:33 -- host/digest.sh@93 -- # exp_module=software 00:22:19.476 02:19:33 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:19.476 02:19:33 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:19.476 02:19:33 -- host/digest.sh@97 -- # killprocess 84505 00:22:19.476 02:19:33 -- common/autotest_common.sh@926 -- # '[' -z 84505 ']' 00:22:19.476 02:19:33 -- common/autotest_common.sh@930 -- # kill -0 84505 00:22:19.476 02:19:33 -- common/autotest_common.sh@931 -- # uname 00:22:19.476 02:19:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:19.476 02:19:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84505 00:22:19.476 02:19:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:19.476 killing process with pid 84505 00:22:19.476 02:19:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:19.476 02:19:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84505' 00:22:19.476 02:19:33 -- common/autotest_common.sh@945 -- # kill 84505 00:22:19.476 Received shutdown signal, test time was about 2.000000 seconds 00:22:19.476 00:22:19.476 Latency(us) 00:22:19.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.476 =================================================================================================================== 00:22:19.476 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.476 02:19:33 -- common/autotest_common.sh@950 -- # wait 84505 00:22:19.735 02:19:34 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:19.735 02:19:34 -- host/digest.sh@77 -- # local rw bs qd 00:22:19.735 02:19:34 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:19.735 02:19:34 -- host/digest.sh@80 -- # rw=randwrite 00:22:19.735 02:19:34 -- host/digest.sh@80 -- # bs=4096 00:22:19.735 02:19:34 -- host/digest.sh@80 -- # qd=128 00:22:19.735 02:19:34 -- host/digest.sh@82 -- # bperfpid=84590 00:22:19.735 02:19:34 -- host/digest.sh@83 -- # waitforlisten 84590 /var/tmp/bperf.sock 00:22:19.735 02:19:34 -- common/autotest_common.sh@819 -- # '[' -z 84590 ']' 00:22:19.735 02:19:34 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:19.735 02:19:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:19.735 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bperf.sock... 00:22:19.735 02:19:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.735 02:19:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:19.735 02:19:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.735 02:19:34 -- common/autotest_common.sh@10 -- # set +x 00:22:19.735 [2024-05-14 02:19:34.276743] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:19.735 [2024-05-14 02:19:34.276903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84590 ] 00:22:19.993 [2024-05-14 02:19:34.423826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.993 [2024-05-14 02:19:34.486758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.928 02:19:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:20.928 02:19:35 -- common/autotest_common.sh@852 -- # return 0 00:22:20.928 02:19:35 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:20.928 02:19:35 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:20.928 02:19:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:21.186 02:19:35 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.186 02:19:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.445 nvme0n1 00:22:21.445 02:19:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:21.445 02:19:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:21.703 Running I/O for 2 seconds... 
00:22:23.606 00:22:23.606 Latency(us) 00:22:23.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.606 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:23.606 nvme0n1 : 2.00 19772.36 77.24 0.00 0.00 6463.68 2576.76 16920.20 00:22:23.606 =================================================================================================================== 00:22:23.606 Total : 19772.36 77.24 0.00 0.00 6463.68 2576.76 16920.20 00:22:23.606 0 00:22:23.606 02:19:38 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:23.606 02:19:38 -- host/digest.sh@92 -- # get_accel_stats 00:22:23.606 02:19:38 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:23.606 02:19:38 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:23.606 | select(.opcode=="crc32c") 00:22:23.606 | "\(.module_name) \(.executed)"' 00:22:23.606 02:19:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:23.865 02:19:38 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:23.865 02:19:38 -- host/digest.sh@93 -- # exp_module=software 00:22:23.865 02:19:38 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:23.865 02:19:38 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:23.865 02:19:38 -- host/digest.sh@97 -- # killprocess 84590 00:22:23.865 02:19:38 -- common/autotest_common.sh@926 -- # '[' -z 84590 ']' 00:22:23.865 02:19:38 -- common/autotest_common.sh@930 -- # kill -0 84590 00:22:23.865 02:19:38 -- common/autotest_common.sh@931 -- # uname 00:22:23.865 02:19:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:23.865 02:19:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84590 00:22:23.865 02:19:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:23.865 killing process with pid 84590 00:22:23.865 02:19:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:23.865 02:19:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84590' 00:22:23.865 02:19:38 -- common/autotest_common.sh@945 -- # kill 84590 00:22:23.865 Received shutdown signal, test time was about 2.000000 seconds 00:22:23.865 00:22:23.865 Latency(us) 00:22:23.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.865 =================================================================================================================== 00:22:23.865 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.865 02:19:38 -- common/autotest_common.sh@950 -- # wait 84590 00:22:24.123 02:19:38 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:24.123 02:19:38 -- host/digest.sh@77 -- # local rw bs qd 00:22:24.124 02:19:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:24.124 02:19:38 -- host/digest.sh@80 -- # rw=randwrite 00:22:24.124 02:19:38 -- host/digest.sh@80 -- # bs=131072 00:22:24.124 02:19:38 -- host/digest.sh@80 -- # qd=16 00:22:24.124 02:19:38 -- host/digest.sh@82 -- # bperfpid=84686 00:22:24.124 02:19:38 -- host/digest.sh@83 -- # waitforlisten 84686 /var/tmp/bperf.sock 00:22:24.124 02:19:38 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:24.124 02:19:38 -- common/autotest_common.sh@819 -- # '[' -z 84686 ']' 00:22:24.124 02:19:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:24.124 02:19:38 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:24.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:24.124 02:19:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:24.124 02:19:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:24.124 02:19:38 -- common/autotest_common.sh@10 -- # set +x 00:22:24.124 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:24.124 Zero copy mechanism will not be used. 00:22:24.124 [2024-05-14 02:19:38.651681] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:24.124 [2024-05-14 02:19:38.651757] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84686 ] 00:22:24.383 [2024-05-14 02:19:38.789384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.383 [2024-05-14 02:19:38.852556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.383 02:19:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:24.383 02:19:38 -- common/autotest_common.sh@852 -- # return 0 00:22:24.383 02:19:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:24.383 02:19:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:24.383 02:19:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:24.642 02:19:39 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:24.642 02:19:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.209 nvme0n1 00:22:25.209 02:19:39 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:25.209 02:19:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:25.209 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:25.209 Zero copy mechanism will not be used. 00:22:25.209 Running I/O for 2 seconds... 
00:22:27.114 00:22:27.114 Latency(us) 00:22:27.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.114 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:27.114 nvme0n1 : 2.00 6240.56 780.07 0.00 0.00 2557.76 1593.72 6166.34 00:22:27.114 =================================================================================================================== 00:22:27.114 Total : 6240.56 780.07 0.00 0.00 2557.76 1593.72 6166.34 00:22:27.114 0 00:22:27.114 02:19:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:27.114 02:19:41 -- host/digest.sh@92 -- # get_accel_stats 00:22:27.114 02:19:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:27.114 02:19:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:27.114 | select(.opcode=="crc32c") 00:22:27.114 | "\(.module_name) \(.executed)"' 00:22:27.114 02:19:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:27.373 02:19:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:27.373 02:19:41 -- host/digest.sh@93 -- # exp_module=software 00:22:27.373 02:19:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:27.373 02:19:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:27.373 02:19:41 -- host/digest.sh@97 -- # killprocess 84686 00:22:27.373 02:19:41 -- common/autotest_common.sh@926 -- # '[' -z 84686 ']' 00:22:27.373 02:19:41 -- common/autotest_common.sh@930 -- # kill -0 84686 00:22:27.373 02:19:41 -- common/autotest_common.sh@931 -- # uname 00:22:27.632 02:19:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:27.632 02:19:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84686 00:22:27.632 killing process with pid 84686 00:22:27.632 Received shutdown signal, test time was about 2.000000 seconds 00:22:27.632 00:22:27.632 Latency(us) 00:22:27.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.632 =================================================================================================================== 00:22:27.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.632 02:19:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:27.632 02:19:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:27.632 02:19:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84686' 00:22:27.632 02:19:41 -- common/autotest_common.sh@945 -- # kill 84686 00:22:27.632 02:19:41 -- common/autotest_common.sh@950 -- # wait 84686 00:22:27.632 02:19:42 -- host/digest.sh@126 -- # killprocess 84378 00:22:27.632 02:19:42 -- common/autotest_common.sh@926 -- # '[' -z 84378 ']' 00:22:27.632 02:19:42 -- common/autotest_common.sh@930 -- # kill -0 84378 00:22:27.632 02:19:42 -- common/autotest_common.sh@931 -- # uname 00:22:27.632 02:19:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:27.632 02:19:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84378 00:22:27.632 killing process with pid 84378 00:22:27.632 02:19:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:27.632 02:19:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:27.632 02:19:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84378' 00:22:27.632 02:19:42 -- common/autotest_common.sh@945 -- # kill 84378 00:22:27.632 02:19:42 -- common/autotest_common.sh@950 -- # wait 84378 00:22:27.891 00:22:27.891 real 0m17.395s 00:22:27.891 user 
0m33.177s 00:22:27.891 sys 0m4.389s 00:22:27.891 02:19:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.891 ************************************ 00:22:27.891 END TEST nvmf_digest_clean 00:22:27.891 ************************************ 00:22:27.891 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:22:27.891 02:19:42 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:27.891 02:19:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:27.891 02:19:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:27.891 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.151 ************************************ 00:22:28.151 START TEST nvmf_digest_error 00:22:28.151 ************************************ 00:22:28.151 02:19:42 -- common/autotest_common.sh@1104 -- # run_digest_error 00:22:28.151 02:19:42 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:28.151 02:19:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:28.151 02:19:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:28.151 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.151 02:19:42 -- nvmf/common.sh@469 -- # nvmfpid=84786 00:22:28.151 02:19:42 -- nvmf/common.sh@470 -- # waitforlisten 84786 00:22:28.151 02:19:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:28.151 02:19:42 -- common/autotest_common.sh@819 -- # '[' -z 84786 ']' 00:22:28.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.151 02:19:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.151 02:19:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.151 02:19:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.151 02:19:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.151 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.151 [2024-05-14 02:19:42.554382] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:28.151 [2024-05-14 02:19:42.554489] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.151 [2024-05-14 02:19:42.696861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.411 [2024-05-14 02:19:42.764587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:28.411 [2024-05-14 02:19:42.764775] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.411 [2024-05-14 02:19:42.764789] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.411 [2024-05-14 02:19:42.764832] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.411 [2024-05-14 02:19:42.764865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.411 02:19:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:28.411 02:19:42 -- common/autotest_common.sh@852 -- # return 0 00:22:28.411 02:19:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:28.411 02:19:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:28.411 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.411 02:19:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.411 02:19:42 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:28.411 02:19:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.411 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.411 [2024-05-14 02:19:42.849276] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:28.411 02:19:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.411 02:19:42 -- host/digest.sh@104 -- # common_target_config 00:22:28.411 02:19:42 -- host/digest.sh@43 -- # rpc_cmd 00:22:28.411 02:19:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.411 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.411 null0 00:22:28.411 [2024-05-14 02:19:42.936302] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.411 [2024-05-14 02:19:42.960515] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:28.411 02:19:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.411 02:19:42 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:28.411 02:19:42 -- host/digest.sh@54 -- # local rw bs qd 00:22:28.411 02:19:42 -- host/digest.sh@56 -- # rw=randread 00:22:28.411 02:19:42 -- host/digest.sh@56 -- # bs=4096 00:22:28.411 02:19:42 -- host/digest.sh@56 -- # qd=128 00:22:28.411 02:19:42 -- host/digest.sh@58 -- # bperfpid=84815 00:22:28.411 02:19:42 -- host/digest.sh@60 -- # waitforlisten 84815 /var/tmp/bperf.sock 00:22:28.411 02:19:42 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:28.411 02:19:42 -- common/autotest_common.sh@819 -- # '[' -z 84815 ']' 00:22:28.411 02:19:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:28.411 02:19:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.411 02:19:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:28.411 02:19:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.411 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.670 [2024-05-14 02:19:43.024436] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:28.670 [2024-05-14 02:19:43.024684] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84815 ] 00:22:28.670 [2024-05-14 02:19:43.164561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.929 [2024-05-14 02:19:43.262588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.497 02:19:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:29.497 02:19:44 -- common/autotest_common.sh@852 -- # return 0 00:22:29.497 02:19:44 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:29.497 02:19:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:29.755 02:19:44 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:29.755 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.755 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:22:29.755 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.755 02:19:44 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:29.756 02:19:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:30.376 nvme0n1 00:22:30.376 02:19:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:30.376 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.376 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:22:30.376 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.376 02:19:44 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:30.376 02:19:44 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:30.376 Running I/O for 2 seconds... 
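The error-injection run just started differs from the clean-digest runs in one respect: on the nvmf_tgt side the crc32c opcode is routed to the error accel module and corruption is injected once the controller is attached, so the data digests the target produces no longer match what the initiator computes. A sketch condensed from the RPCs visible in the trace above (paths shortened; rpc_cmd goes to the target's default RPC socket, bperf_rpc to /var/tmp/bperf.sock):

# target: route crc32c operations to the error-injection accel module (before the listener comes up)
scripts/rpc.py accel_assign_opc -o crc32c -m error

# bdevperf: keep NVMe error statistics and retry failed I/O indefinitely, then attach with --ddgst
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
scripts/rpc.py accel_error_inject_error -o crc32c -t disable      # injection off while attaching
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# target: corrupt the next 256 crc32c results, then run the workload
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The 'data digest error' messages and the READ completions with COMMAND TRANSIENT TRANSPORT ERROR (00/22) that follow are consistent with that setup: the initiator's digest check fails on the corrupted data and the retried I/O is surfaced through the error statistics.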
00:22:30.376 [2024-05-14 02:19:44.788725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.376 [2024-05-14 02:19:44.788783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.376 [2024-05-14 02:19:44.788816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.376 [2024-05-14 02:19:44.805966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.376 [2024-05-14 02:19:44.806007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.376 [2024-05-14 02:19:44.806021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.377 [2024-05-14 02:19:44.822699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.377 [2024-05-14 02:19:44.822770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.377 [2024-05-14 02:19:44.822830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.377 [2024-05-14 02:19:44.839631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.377 [2024-05-14 02:19:44.839687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.377 [2024-05-14 02:19:44.839717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.377 [2024-05-14 02:19:44.856429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.377 [2024-05-14 02:19:44.856484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.377 [2024-05-14 02:19:44.856514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.377 [2024-05-14 02:19:44.874102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.377 [2024-05-14 02:19:44.874142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.377 [2024-05-14 02:19:44.874158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.377 [2024-05-14 02:19:44.890616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.377 [2024-05-14 02:19:44.890655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.377 [2024-05-14 02:19:44.890668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.377 [2024-05-14 02:19:44.908944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.377 [2024-05-14 02:19:44.909001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.377 [2024-05-14 02:19:44.909031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.377 [2024-05-14 02:19:44.925310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.377 [2024-05-14 02:19:44.925348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.377 [2024-05-14 02:19:44.925378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.377 [2024-05-14 02:19:44.942437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.377 [2024-05-14 02:19:44.942478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.377 [2024-05-14 02:19:44.942508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.377 [2024-05-14 02:19:44.960497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.377 [2024-05-14 02:19:44.960570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.377 [2024-05-14 02:19:44.960585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.639 [2024-05-14 02:19:44.977772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.639 [2024-05-14 02:19:44.977835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.639 [2024-05-14 02:19:44.977865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.639 [2024-05-14 02:19:44.993897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.639 [2024-05-14 02:19:44.993969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:44.993984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.006560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.006598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.006628] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.023384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.023424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.023454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.040169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.040253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.040284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.054584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.054641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.054671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.067814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.067866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.067896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.079517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.079593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.079622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.097437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.097477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.097491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.114471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.114526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 
02:19:45.114556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.131669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.131726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.131756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.146635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.146689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.146719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.159386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.159442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.159472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.175826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.175877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.175892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.190451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.190488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.190517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.205568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.205639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.205685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.640 [2024-05-14 02:19:45.220330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.640 [2024-05-14 02:19:45.220399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4545 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:30.640 [2024-05-14 02:19:45.220428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.900 [2024-05-14 02:19:45.235904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.900 [2024-05-14 02:19:45.235950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.900 [2024-05-14 02:19:45.235964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.900 [2024-05-14 02:19:45.248866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.900 [2024-05-14 02:19:45.248903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.900 [2024-05-14 02:19:45.248916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.900 [2024-05-14 02:19:45.265995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.900 [2024-05-14 02:19:45.266034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.900 [2024-05-14 02:19:45.266049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.900 [2024-05-14 02:19:45.281841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.900 [2024-05-14 02:19:45.281905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.900 [2024-05-14 02:19:45.281927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.900 [2024-05-14 02:19:45.297928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.297966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.297980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.315627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.315683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.315713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.332295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.332333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.332363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.350512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.350566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.350580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.366823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.366870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.366885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.380424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.380510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.380540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.395360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.395431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.395462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.411482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.411538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.411551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.428270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.428325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.428356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.443925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.443977] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.443991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.463276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.463331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.463361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.901 [2024-05-14 02:19:45.479069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:30.901 [2024-05-14 02:19:45.479109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.901 [2024-05-14 02:19:45.479122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.159 [2024-05-14 02:19:45.496528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.159 [2024-05-14 02:19:45.496566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.159 [2024-05-14 02:19:45.496580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.159 [2024-05-14 02:19:45.508786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.159 [2024-05-14 02:19:45.508837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.159 [2024-05-14 02:19:45.508852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.525820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.525867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.525882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.542222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.542295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.542340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.557466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.557523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.557537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.570557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.570600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.570615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.584340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.584431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.584445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.595615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.595657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.595672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.609990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.610041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.610055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.623760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.623842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.623857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.638289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.638331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.638346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.657565] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.657635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.657649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.675811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.675864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.675878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.688729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.688793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.688808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.706178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.706218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.706232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.724900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.724964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.724996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-05-14 02:19:45.743450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.160 [2024-05-14 02:19:45.743491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-05-14 02:19:45.743506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.756157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.756195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.756225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.771124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.771163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.771193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.788115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.788185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.788216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.800688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.800727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.800741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.816019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.816072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.816102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.832923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.832992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.833036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.848868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.848918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.848932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.863130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.863186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.863200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.879811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.879889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.879920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.897318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.897389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.897418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.913912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.913977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.913991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.930700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.930754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.930779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.947977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.948039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.948086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.964840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.964904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.964918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.978536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.978575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.978588] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:45.991516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:45.991569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:45.991600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.419 [2024-05-14 02:19:46.005984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.419 [2024-05-14 02:19:46.006025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.419 [2024-05-14 02:19:46.006039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.020163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.020248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.020262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.032019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.032072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.032104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.047700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.047755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.047795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.062736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.062815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.062830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.075317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.075401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:31.679 [2024-05-14 02:19:46.075431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.089131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.089188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.089202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.105296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.105350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.105380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.123003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.123090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.123120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.135781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.135859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.135889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.150145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.150186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.150201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.163978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.164031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.164061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.177456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.177494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:11294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.177524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.191158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.191229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.191258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.208907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.208954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.208968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.221597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.221636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.221667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.235927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.235991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.236023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.251073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.251128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.251157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.679 [2024-05-14 02:19:46.265091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.679 [2024-05-14 02:19:46.265161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.679 [2024-05-14 02:19:46.265175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.938 [2024-05-14 02:19:46.279537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.938 [2024-05-14 02:19:46.279606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.938 [2024-05-14 02:19:46.279640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.938 [2024-05-14 02:19:46.293334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.938 [2024-05-14 02:19:46.293375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.938 [2024-05-14 02:19:46.293405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.938 [2024-05-14 02:19:46.306935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.938 [2024-05-14 02:19:46.306976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.938 [2024-05-14 02:19:46.306990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.938 [2024-05-14 02:19:46.322037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.938 [2024-05-14 02:19:46.322078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.938 [2024-05-14 02:19:46.322092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.938 [2024-05-14 02:19:46.340029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.938 [2024-05-14 02:19:46.340082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.938 [2024-05-14 02:19:46.340111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.938 [2024-05-14 02:19:46.356270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.938 [2024-05-14 02:19:46.356345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.938 [2024-05-14 02:19:46.356375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.938 [2024-05-14 02:19:46.370530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.938 [2024-05-14 02:19:46.370585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.938 [2024-05-14 02:19:46.370615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.938 [2024-05-14 02:19:46.384861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 
00:22:31.939 [2024-05-14 02:19:46.384909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.939 [2024-05-14 02:19:46.384940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.939 [2024-05-14 02:19:46.399445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.939 [2024-05-14 02:19:46.399546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.939 [2024-05-14 02:19:46.399576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.939 [2024-05-14 02:19:46.414641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.939 [2024-05-14 02:19:46.414711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.939 [2024-05-14 02:19:46.414741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.939 [2024-05-14 02:19:46.428078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.939 [2024-05-14 02:19:46.428124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.939 [2024-05-14 02:19:46.428171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.939 [2024-05-14 02:19:46.444652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.939 [2024-05-14 02:19:46.444691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.939 [2024-05-14 02:19:46.444704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.939 [2024-05-14 02:19:46.461251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.939 [2024-05-14 02:19:46.461289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.939 [2024-05-14 02:19:46.461302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.939 [2024-05-14 02:19:46.478947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.939 [2024-05-14 02:19:46.479017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.939 [2024-05-14 02:19:46.479030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.939 [2024-05-14 02:19:46.496455] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.939 [2024-05-14 02:19:46.496508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.939 [2024-05-14 02:19:46.496538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.939 [2024-05-14 02:19:46.514669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:31.939 [2024-05-14 02:19:46.514723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.939 [2024-05-14 02:19:46.514754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.198 [2024-05-14 02:19:46.532217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.198 [2024-05-14 02:19:46.532271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.198 [2024-05-14 02:19:46.532302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.198 [2024-05-14 02:19:46.546863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.198 [2024-05-14 02:19:46.546913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.198 [2024-05-14 02:19:46.546928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.198 [2024-05-14 02:19:46.560501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.198 [2024-05-14 02:19:46.560554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.198 [2024-05-14 02:19:46.560567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.198 [2024-05-14 02:19:46.577355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.198 [2024-05-14 02:19:46.577409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.198 [2024-05-14 02:19:46.577423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.198 [2024-05-14 02:19:46.594815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.198 [2024-05-14 02:19:46.594866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.198 [2024-05-14 02:19:46.594881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:32.198 [2024-05-14 02:19:46.611579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.198 [2024-05-14 02:19:46.611653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.198 [2024-05-14 02:19:46.611667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.198 [2024-05-14 02:19:46.627969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.198 [2024-05-14 02:19:46.628009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.198 [2024-05-14 02:19:46.628024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.198 [2024-05-14 02:19:46.642992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.198 [2024-05-14 02:19:46.643033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.199 [2024-05-14 02:19:46.643047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.199 [2024-05-14 02:19:46.656827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.199 [2024-05-14 02:19:46.656911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.199 [2024-05-14 02:19:46.656927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.199 [2024-05-14 02:19:46.672534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.199 [2024-05-14 02:19:46.672577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.199 [2024-05-14 02:19:46.672591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.199 [2024-05-14 02:19:46.687685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.199 [2024-05-14 02:19:46.687756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.199 [2024-05-14 02:19:46.687782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.199 [2024-05-14 02:19:46.705021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.199 [2024-05-14 02:19:46.705077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.199 [2024-05-14 02:19:46.705091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.199 [2024-05-14 02:19:46.719514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.199 [2024-05-14 02:19:46.719605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.199 [2024-05-14 02:19:46.719619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.199 [2024-05-14 02:19:46.732399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.199 [2024-05-14 02:19:46.732439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.199 [2024-05-14 02:19:46.732453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.199 [2024-05-14 02:19:46.745850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.199 [2024-05-14 02:19:46.745926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.199 [2024-05-14 02:19:46.745942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.199 [2024-05-14 02:19:46.761059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.199 [2024-05-14 02:19:46.761119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.199 [2024-05-14 02:19:46.761135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.199 [2024-05-14 02:19:46.776263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992230) 00:22:32.199 [2024-05-14 02:19:46.776318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.199 [2024-05-14 02:19:46.776332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.199 00:22:32.199 Latency(us) 00:22:32.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.199 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:32.199 nvme0n1 : 2.01 16391.39 64.03 0.00 0.00 7800.82 3693.85 24188.74 00:22:32.199 =================================================================================================================== 00:22:32.199 Total : 16391.39 64.03 0.00 0.00 7800.82 3693.85 24188.74 00:22:32.199 0 00:22:32.458 02:19:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:32.458 02:19:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:32.458 02:19:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:32.458 02:19:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:32.458 | .driver_specific 
00:22:32.458 | .nvme_error 00:22:32.458 | .status_code 00:22:32.458 | .command_transient_transport_error' 00:22:32.717 02:19:47 -- host/digest.sh@71 -- # (( 129 > 0 )) 00:22:32.717 02:19:47 -- host/digest.sh@73 -- # killprocess 84815 00:22:32.717 02:19:47 -- common/autotest_common.sh@926 -- # '[' -z 84815 ']' 00:22:32.717 02:19:47 -- common/autotest_common.sh@930 -- # kill -0 84815 00:22:32.717 02:19:47 -- common/autotest_common.sh@931 -- # uname 00:22:32.717 02:19:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:32.717 02:19:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84815 00:22:32.717 02:19:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:32.717 02:19:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:32.717 killing process with pid 84815 00:22:32.717 02:19:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84815' 00:22:32.717 02:19:47 -- common/autotest_common.sh@945 -- # kill 84815 00:22:32.717 Received shutdown signal, test time was about 2.000000 seconds 00:22:32.717 00:22:32.717 Latency(us) 00:22:32.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.717 =================================================================================================================== 00:22:32.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.717 02:19:47 -- common/autotest_common.sh@950 -- # wait 84815 00:22:32.977 02:19:47 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:32.977 02:19:47 -- host/digest.sh@54 -- # local rw bs qd 00:22:32.977 02:19:47 -- host/digest.sh@56 -- # rw=randread 00:22:32.977 02:19:47 -- host/digest.sh@56 -- # bs=131072 00:22:32.977 02:19:47 -- host/digest.sh@56 -- # qd=16 00:22:32.977 02:19:47 -- host/digest.sh@58 -- # bperfpid=84902 00:22:32.977 02:19:47 -- host/digest.sh@60 -- # waitforlisten 84902 /var/tmp/bperf.sock 00:22:32.977 02:19:47 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:32.977 02:19:47 -- common/autotest_common.sh@819 -- # '[' -z 84902 ']' 00:22:32.977 02:19:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:32.977 02:19:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:32.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:32.977 02:19:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:32.977 02:19:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:32.977 02:19:47 -- common/autotest_common.sh@10 -- # set +x 00:22:32.977 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:32.977 Zero copy mechanism will not be used. 00:22:32.977 [2024-05-14 02:19:47.381060] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
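For reference, the transient-error check traced above (host/digest.sh@71 get_transient_errcount) reduces to one RPC call plus a jq filter. A minimal sketch, assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock and the bdev is named nvme0n1:
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # this run read back 129
The per-status-code counters under driver_specific.nvme_error are presumably populated because bdev_nvme_set_options is called with --nvme-error-stat (as it is for the next run below), and the 129 corresponds to the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions logged above.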
00:22:32.977 [2024-05-14 02:19:47.381158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84902 ] 00:22:32.977 [2024-05-14 02:19:47.522484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.236 [2024-05-14 02:19:47.587301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.804 02:19:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:33.804 02:19:48 -- common/autotest_common.sh@852 -- # return 0 00:22:33.804 02:19:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:34.063 02:19:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:34.063 02:19:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:34.063 02:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.063 02:19:48 -- common/autotest_common.sh@10 -- # set +x 00:22:34.063 02:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.063 02:19:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:34.063 02:19:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:34.632 nvme0n1 00:22:34.632 02:19:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:34.632 02:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.632 02:19:48 -- common/autotest_common.sh@10 -- # set +x 00:22:34.632 02:19:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.632 02:19:49 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:34.632 02:19:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:34.632 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:34.632 Zero copy mechanism will not be used. 00:22:34.632 Running I/O for 2 seconds... 
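Before the second run's I/O begins, the traced setup boils down to five commands. A condensed sketch using the socket, address and subsystem name from this run (rpc_cmd is the harness's rpc.py wrapper, and the -i 32 argument is carried over verbatim from the trace):
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable                      # clear any previous crc32c injection
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                         # attach with data digest enabled
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32                # arm crc32c corruption
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests    # 2 s of 131072-byte randreads at qd 16
With --ddgst enabled and crc32c corruption armed, the affected reads complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the records that follow show; --bdev-retry-count -1 presumably keeps the bdev layer retrying those completions instead of failing the run.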
00:22:34.632 [2024-05-14 02:19:49.139850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.632 [2024-05-14 02:19:49.139995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.632 [2024-05-14 02:19:49.140012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.632 [2024-05-14 02:19:49.144968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.632 [2024-05-14 02:19:49.145045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.632 [2024-05-14 02:19:49.145060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.632 [2024-05-14 02:19:49.149009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.632 [2024-05-14 02:19:49.149048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.632 [2024-05-14 02:19:49.149062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.632 [2024-05-14 02:19:49.154043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.632 [2024-05-14 02:19:49.154084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.632 [2024-05-14 02:19:49.154098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.632 [2024-05-14 02:19:49.158241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.632 [2024-05-14 02:19:49.158312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.632 [2024-05-14 02:19:49.158341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.632 [2024-05-14 02:19:49.162756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.632 [2024-05-14 02:19:49.162820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.632 [2024-05-14 02:19:49.162834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.632 [2024-05-14 02:19:49.167418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.632 [2024-05-14 02:19:49.167458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.167473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.171664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.171717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.171730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.176031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.176086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.176115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.181204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.181258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.181272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.186192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.186234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.186248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.190880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.190947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.190961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.195996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.196063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.196078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.200407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.200492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.200507] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.204182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.204236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.204249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.209179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.209217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.209231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.213795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.213875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.213890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.633 [2024-05-14 02:19:49.218360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.633 [2024-05-14 02:19:49.218400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.633 [2024-05-14 02:19:49.218414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.893 [2024-05-14 02:19:49.222466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.893 [2024-05-14 02:19:49.222523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.893 [2024-05-14 02:19:49.222537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.893 [2024-05-14 02:19:49.227431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.893 [2024-05-14 02:19:49.227487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.893 [2024-05-14 02:19:49.227501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.893 [2024-05-14 02:19:49.232124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.893 [2024-05-14 02:19:49.232164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.893 [2024-05-14 02:19:49.232179] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.893 [2024-05-14 02:19:49.237159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.893 [2024-05-14 02:19:49.237200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.893 [2024-05-14 02:19:49.237214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.893 [2024-05-14 02:19:49.241253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.893 [2024-05-14 02:19:49.241342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.893 [2024-05-14 02:19:49.241356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.893 [2024-05-14 02:19:49.245606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.893 [2024-05-14 02:19:49.245644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.893 [2024-05-14 02:19:49.245657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.893 [2024-05-14 02:19:49.250444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.893 [2024-05-14 02:19:49.250499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.893 [2024-05-14 02:19:49.250513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.893 [2024-05-14 02:19:49.254439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.893 [2024-05-14 02:19:49.254480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.893 [2024-05-14 02:19:49.254495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.893 [2024-05-14 02:19:49.258729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.893 [2024-05-14 02:19:49.258780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.893 [2024-05-14 02:19:49.258795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.893 [2024-05-14 02:19:49.263795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.893 [2024-05-14 02:19:49.263841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:34.894 [2024-05-14 02:19:49.263854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.267974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.268027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.268041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.272529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.272569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.272582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.276349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.276404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.276418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.280714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.280782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.280797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.284745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.284797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.284812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.288522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.288591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.288605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.292391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.292429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.292443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.296412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.296482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.296496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.301058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.301098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.301111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.305258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.305343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.305357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.309177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.309213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.309225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.313268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.313324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.313338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.318048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.318088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.318102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.322828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.322894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.322909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.327397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.327455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.327484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.332123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.332161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.332174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.335245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.335298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.335312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.339855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.339981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.339995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.344129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.344182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.344211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.348625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.348681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.348696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.353753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.353846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.353876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.357825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.357869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.357899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.362423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.362475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.362489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.367387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.367454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.367467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.372013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.372102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.372115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.376943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.377013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.377026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.381562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 [2024-05-14 02:19:49.381632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.894 [2024-05-14 02:19:49.381645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.894 [2024-05-14 02:19:49.385847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.894 
[2024-05-14 02:19:49.385895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.385909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.390454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.390492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.390506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.394465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.394521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.394535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.398980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.399036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.399050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.403597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.403640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.403659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.408030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.408101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.408116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.412681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.412721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.412736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.417220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.417261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.417275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.421421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.421461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.421475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.425394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.425448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.425462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.429833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.429871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.429886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.434129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.434168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.434182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.438319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.438359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.438373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.443061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.443101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.443115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.447555] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.447596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.447611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.452529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.452583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.452626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.457035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.457089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.457118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.461448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.461504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.461533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.466155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.466197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.466210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.471400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.471439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.471468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.895 [2024-05-14 02:19:49.476499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:34.895 [2024-05-14 02:19:49.476572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.895 [2024-05-14 02:19:49.476587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
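Each repeated triplet above is one injected failure working its way back up the stack: nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest mismatch on the received PDU, the offending READ command is printed, and the command is completed with TRANSIENT TRANSPORT ERROR (status 00/22). Because retries are unlimited, the workload keeps running and the failures only surface as counters; the test then asserts that the transient-transport-error count is non-zero, as in the (( 129 > 0 )) check from the previous iteration near the top of this excerpt. A sketch of that post-run check follows; the tail of the jq filter is visible in the trace, but the producing RPC (bdev_get_iostat) and the leading .bdevs[0].driver_specific path are assumptions.

# Sketch: count the injected digest failures after the run (RPC name and JSON prefix assumed).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock
errs=$("$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))    # pass if at least one transient transport error was recorded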
00:22:35.156 [2024-05-14 02:19:49.480681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.156 [2024-05-14 02:19:49.480720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.156 [2024-05-14 02:19:49.480749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.156 [2024-05-14 02:19:49.484463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.156 [2024-05-14 02:19:49.484516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.156 [2024-05-14 02:19:49.484529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.489659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.489731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.489745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.493996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.494037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.494052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.498502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.498557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.498571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.502740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.502836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.502851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.507208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.507263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.507277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.511305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.511361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.511391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.515603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.515657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.515685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.519754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.519815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.519828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.523851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.523934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.523965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.527764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.527877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.527891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.531946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.532013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.532027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.536706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.536787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.536802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.541378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.541415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.541428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.545641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.545694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.545707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.549294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.549347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.549376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.553695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.553748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.553822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.558246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.558287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.558301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.562685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.562740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.562753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.567469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.567523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.567553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.571890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.571946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.571960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.576464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.576535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.576564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.581352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.581406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.581436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.585850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.585914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.585955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.589907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.589955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.589969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.593198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.157 [2024-05-14 02:19:49.593235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.157 [2024-05-14 02:19:49.593249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.157 [2024-05-14 02:19:49.597836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.597885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 
[2024-05-14 02:19:49.597900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.602156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.602195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.602209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.606147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.606186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.606199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.610797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.610850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.610864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.614921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.615002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.615016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.619656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.619739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.619767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.625264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.625336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.625364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.630024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.630064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.630078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.634489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.634555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.634584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.638572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.638626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.638670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.642957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.642997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.643011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.646750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.646816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.646845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.650996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.651036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.651050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.654752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.654802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.654816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.658463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.658501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.658514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.663230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.663283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.663312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.668096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.668150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.668163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.672432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.672484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.672512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.677069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.677122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.677136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.681414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.681452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.681481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.685616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.685654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.685667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.690044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.690084] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.690098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.694024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.694078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.694092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.698722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.158 [2024-05-14 02:19:49.698759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.158 [2024-05-14 02:19:49.698800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.158 [2024-05-14 02:19:49.703029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.159 [2024-05-14 02:19:49.703081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.159 [2024-05-14 02:19:49.703110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.159 [2024-05-14 02:19:49.707465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.159 [2024-05-14 02:19:49.707517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.159 [2024-05-14 02:19:49.707531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.159 [2024-05-14 02:19:49.712081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.159 [2024-05-14 02:19:49.712137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.159 [2024-05-14 02:19:49.712151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.159 [2024-05-14 02:19:49.716641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.159 [2024-05-14 02:19:49.716694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.159 [2024-05-14 02:19:49.716722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.159 [2024-05-14 02:19:49.720845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.159 [2024-05-14 02:19:49.720892] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.159 [2024-05-14 02:19:49.720905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.159 [2024-05-14 02:19:49.724583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.159 [2024-05-14 02:19:49.724637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.159 [2024-05-14 02:19:49.724667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.159 [2024-05-14 02:19:49.730218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.159 [2024-05-14 02:19:49.730304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.159 [2024-05-14 02:19:49.730333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.159 [2024-05-14 02:19:49.734628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.159 [2024-05-14 02:19:49.734683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.159 [2024-05-14 02:19:49.734713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.159 [2024-05-14 02:19:49.738751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.159 [2024-05-14 02:19:49.738863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.159 [2024-05-14 02:19:49.738877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.159 [2024-05-14 02:19:49.742992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.159 [2024-05-14 02:19:49.743045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.159 [2024-05-14 02:19:49.743074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.747036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.747090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.747103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.752138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 
00:22:35.421 [2024-05-14 02:19:49.752193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.752207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.756611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.756680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.756709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.760741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.760820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.760835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.765384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.765421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.765449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.769766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.769830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.769845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.772678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.772715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.772745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.776467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.776520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.776549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.780848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.780916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.780930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.785654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.785709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.785738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.790059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.790098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.790112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.795202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.795257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.795286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.799889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.799935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.799965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.804035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.804103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.804132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.808549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.808602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.808631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.813824] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.813868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.813880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.818447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.818500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.818513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.821591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.821645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.821674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.825432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.825515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.825528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.830384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.830437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.830450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.834866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.421 [2024-05-14 02:19:49.834912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.421 [2024-05-14 02:19:49.834941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.421 [2024-05-14 02:19:49.839738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.839816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.839830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:35.422 [2024-05-14 02:19:49.844651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.844721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.844750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.848986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.849039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.849069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.854051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.854091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.854107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.858572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.858642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.858672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.862641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.862697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.862741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.866679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.866734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.866747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.870953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.871048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.871062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.875631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.875704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.875717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.879710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.879791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.879806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.884268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.884322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.884351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.888139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.888192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.888221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.892587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.892642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.892672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.897229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.897283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.897312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.902017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.902058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.902072] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.905868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.905964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.905979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.909881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.909975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.909992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.914475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.914529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.914557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.918503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.918552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.918565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.922584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.922620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.922648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.927066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.927103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.927131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.931301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.931354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.931382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.935057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.935128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.935143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.939109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.939146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.939174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.943459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.422 [2024-05-14 02:19:49.943514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.422 [2024-05-14 02:19:49.943528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.422 [2024-05-14 02:19:49.947169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.947207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.947235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.951083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.951120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.951150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.956205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.956260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.956289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.960382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.960420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:35.423 [2024-05-14 02:19:49.960448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.965216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.965254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.965268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.969375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.969449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.969478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.974083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.974123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.974136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.978790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.978869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.978884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.983778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.983841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.983855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.988540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.988593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.988622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.993402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.993457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.993487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:49.998136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:49.998177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:49.998191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:50.002626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:50.002666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:50.002681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.423 [2024-05-14 02:19:50.006565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.423 [2024-05-14 02:19:50.006604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.423 [2024-05-14 02:19:50.006618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.010486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.010557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.010570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.014988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.015029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.015043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.018807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.018843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.018856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.022525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.022564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.022578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.027204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.027260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.027274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.031703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.031756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.031783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.035829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.035892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.035906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.040525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.040610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.040648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.045053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.045091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.045104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.049527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.049564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.049577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.053599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.053656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.053669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.058264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.058331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.058344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.062906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.062990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.063004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.066798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.066847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.066861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.071438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.071475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.071489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.075793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.075888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.075902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.080675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.080727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.080739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.085484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 
[2024-05-14 02:19:50.085523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.085536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.090389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.090472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.090485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.095170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.095223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.095236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.099568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.099638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.099666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.104347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.104414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.104427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.108559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.108628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.108641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.112967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.113015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.113030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.684 [2024-05-14 02:19:50.116159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x95fa30) 00:22:35.684 [2024-05-14 02:19:50.116211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.684 [2024-05-14 02:19:50.116240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.120729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.120792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.120822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.125121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.125160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.125174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.130395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.130449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.130461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.134979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.135033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.135047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.140272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.140326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.140354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.144728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.144790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.144820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.148916] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.148964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.148978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.153412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.153448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.153479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.157730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.157793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.157834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.162346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.162400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.162413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.166244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.166314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.166327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.170060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.170099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.170113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.173827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.173905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.173928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:35.685 [2024-05-14 02:19:50.178561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.178615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.178643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.182986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.183026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.183039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.187602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.187640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.187654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.192311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.192382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.192410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.197059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.197098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.197142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.201494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.201547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.201560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.205395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.205432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.205462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.209201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.209239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.209251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.214673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.214729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.214743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.218932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.218986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.219000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.223482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.223537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.223551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.228200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.228267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.228280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.232707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.232760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.232815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.237677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.237716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.685 [2024-05-14 02:19:50.237745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.685 [2024-05-14 02:19:50.241902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.685 [2024-05-14 02:19:50.241964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-05-14 02:19:50.241978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.686 [2024-05-14 02:19:50.245663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.686 [2024-05-14 02:19:50.245714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-05-14 02:19:50.245743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.686 [2024-05-14 02:19:50.249247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.686 [2024-05-14 02:19:50.249317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-05-14 02:19:50.249331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-05-14 02:19:50.253400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.686 [2024-05-14 02:19:50.253452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-05-14 02:19:50.253482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.686 [2024-05-14 02:19:50.258047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.686 [2024-05-14 02:19:50.258087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-05-14 02:19:50.258101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.686 [2024-05-14 02:19:50.263035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.686 [2024-05-14 02:19:50.263090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-05-14 02:19:50.263104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.686 [2024-05-14 02:19:50.267708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.686 [2024-05-14 02:19:50.267802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-05-14 02:19:50.267815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.272697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.272750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.272790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.277188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.277228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.277242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.281548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.281630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.281659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.285557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.285642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.285670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.290132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.290172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.290185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.294222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.294262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.294291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.298160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.298199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 
[2024-05-14 02:19:50.298213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.302370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.302423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.302452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.307097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.307149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.307192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.311515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.311569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.311582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.315839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.315918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.315966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.319969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.320008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.320021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.324225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.324266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.324280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.328563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.328617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5216 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.328631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.332064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.332104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.332118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.335879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.335929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.335944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.339766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.339814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.339829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.344083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.344122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.344135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.348809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.348863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.947 [2024-05-14 02:19:50.348877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.947 [2024-05-14 02:19:50.353332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.947 [2024-05-14 02:19:50.353375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.353390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.357985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.358023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.358038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.362078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.362116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.362129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.365744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.365796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.365810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.370068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.370108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.370128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.373880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.373939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.373954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.377987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.378027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.378041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.382611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.382651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.382665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.387182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.387238] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.387252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.391143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.391183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.391197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.394330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.394402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.394416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.398226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.398266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.398279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.402028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.402066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.402080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.406518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.406557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.406571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.410501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.410535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.410549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.414187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.414221] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.414234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.418719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.418757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.418786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.422395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.422433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.422447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.426339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.426382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.426395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.430131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.430170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.430184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.433965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.434002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.434016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.439349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.439404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.439418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.443376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 
00:22:35.948 [2024-05-14 02:19:50.443416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.443430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.447783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.447847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.447877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.948 [2024-05-14 02:19:50.452733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.948 [2024-05-14 02:19:50.452785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.948 [2024-05-14 02:19:50.452800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.457361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.457413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.457434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.461495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.461564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.461594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.465689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.465727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.465741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.469630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.469686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.469700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.473315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.473369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.473383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.477689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.477797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.477812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.482280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.482351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.482364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.486718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.486759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.486803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.490953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.491028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.491042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.495790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.495868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.495882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.499740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.499806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.499820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.503609] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.503663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.503677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.508045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.508098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.508111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.512629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.512674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.512687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.516818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.516914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.516928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.520980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.521019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.521033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.524818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.524873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.524888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.949 [2024-05-14 02:19:50.528587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.528642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.528656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:35.949 [2024-05-14 02:19:50.531928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:35.949 [2024-05-14 02:19:50.531967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.949 [2024-05-14 02:19:50.531981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.210 [2024-05-14 02:19:50.535588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.210 [2024-05-14 02:19:50.535629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.210 [2024-05-14 02:19:50.535643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.210 [2024-05-14 02:19:50.539384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.210 [2024-05-14 02:19:50.539454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.210 [2024-05-14 02:19:50.539468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.210 [2024-05-14 02:19:50.544406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.210 [2024-05-14 02:19:50.544446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.210 [2024-05-14 02:19:50.544461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.210 [2024-05-14 02:19:50.548372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.210 [2024-05-14 02:19:50.548412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.210 [2024-05-14 02:19:50.548426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.210 [2024-05-14 02:19:50.552982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.210 [2024-05-14 02:19:50.553031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.210 [2024-05-14 02:19:50.553047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.210 [2024-05-14 02:19:50.558003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.210 [2024-05-14 02:19:50.558042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.210 [2024-05-14 02:19:50.558056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.210 [2024-05-14 02:19:50.562600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.210 [2024-05-14 02:19:50.562684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.210 [2024-05-14 02:19:50.562714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.210 [2024-05-14 02:19:50.566588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.210 [2024-05-14 02:19:50.566640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.566670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.571704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.571758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.571850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.576772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.576835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.576881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.581910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.581987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.582002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.586407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.586445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.586459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.590746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.590822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.590838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.595696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.595792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.595806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.600575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.600629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.600643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.604872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.604936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.604966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.609298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.609353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.609367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.613962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.614003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.614017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.618219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.618261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.618274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.622330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.622371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.622385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.626925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.626961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.626977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.631860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.631909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.631924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.636409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.636448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.636462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.640692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.640732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.640746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.644645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.644701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.644730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.648392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.648447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.648461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.653376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.653415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 
[2024-05-14 02:19:50.653429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.657867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.657959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.657973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.662384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.662424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.662438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.666339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.666377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.666391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.211 [2024-05-14 02:19:50.670607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.211 [2024-05-14 02:19:50.670646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.211 [2024-05-14 02:19:50.670660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.674811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.674860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.674874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.678861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.678911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.678926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.682677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.682717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.682731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.687404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.687489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.687519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.692055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.692109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.692139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.696230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.696285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.696315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.701430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.701530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.701550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.706681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.706736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.706750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.711148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.711200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.711230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.715774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.715868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.715899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.720139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.720228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.720257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.724988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.725051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.725080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.729311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.729379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.729408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.733293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.733346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.733376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.738023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.738063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.738077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.742092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.742131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.742145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.746642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.746694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.746724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.751762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.751825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.751855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.756031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.756085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.756119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.760278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.760331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.760360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.764768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.764846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.764875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.768620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.768691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.212 [2024-05-14 02:19:50.768705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.212 [2024-05-14 02:19:50.772433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.212 [2024-05-14 02:19:50.772487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.213 [2024-05-14 02:19:50.772517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.213 [2024-05-14 02:19:50.776659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.213 
[2024-05-14 02:19:50.776714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.213 [2024-05-14 02:19:50.776743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.213 [2024-05-14 02:19:50.782087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.213 [2024-05-14 02:19:50.782128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.213 [2024-05-14 02:19:50.782141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.213 [2024-05-14 02:19:50.785658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.213 [2024-05-14 02:19:50.785774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.213 [2024-05-14 02:19:50.785814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.213 [2024-05-14 02:19:50.790207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.213 [2024-05-14 02:19:50.790247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.213 [2024-05-14 02:19:50.790261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.213 [2024-05-14 02:19:50.794527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.213 [2024-05-14 02:19:50.794581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.213 [2024-05-14 02:19:50.794610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.473 [2024-05-14 02:19:50.799572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.473 [2024-05-14 02:19:50.799628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.799641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.804224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.804263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.804277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.808723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.808800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.808816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.813811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.813876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.813931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.819419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.819471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.819500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.823991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.824042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.824078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.828311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.828362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.828392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.833202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.833254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.833303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.837840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.837887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.837900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.842590] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.842644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.842657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.845719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.845757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.845784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.850035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.850074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.850088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.854349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.854387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.854416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.858466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.858520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.858550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.863054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.863138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.863167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.866593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.866647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.866661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:36.474 [2024-05-14 02:19:50.871366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.871440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.871470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.876233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.876286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.876316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.880386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.880455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.880499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.884395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.884432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.884462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.889299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.889336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.889365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.893393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.893430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.893459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.897229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.897282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.897312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.902040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.902082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.902096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.906734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.906830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.906845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.911667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.474 [2024-05-14 02:19:50.911718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.474 [2024-05-14 02:19:50.911748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.474 [2024-05-14 02:19:50.915739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.915787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.915818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.919773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.919824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.919854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.924052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.924104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.924133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.928503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.928541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.928554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.932943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.933013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.933043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.936582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.936651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.936680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.940864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.940937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.940952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.945346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.945382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.945412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.949026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.949094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.949123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.953772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.953820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.953850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.957661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.957700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.957733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.961966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.962004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.962018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.966119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.966158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.966172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.971130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.971186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.971215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.975663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.975717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.975747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.979697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.979760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.979790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.984452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.984562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.984592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.988647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.988684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 
[2024-05-14 02:19:50.988697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.993081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.993135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.993149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:50.997870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:50.997927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:50.997958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:51.002502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:51.002542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:51.002563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:51.006168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:51.006207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:51.006220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:51.010122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:51.010163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:51.010177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:51.014536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:51.014590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.475 [2024-05-14 02:19:51.014618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.475 [2024-05-14 02:19:51.019460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.475 [2024-05-14 02:19:51.019529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:36.476 [2024-05-14 02:19:51.019557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.476 [2024-05-14 02:19:51.023240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.476 [2024-05-14 02:19:51.023278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.476 [2024-05-14 02:19:51.023291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.476 [2024-05-14 02:19:51.027708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.476 [2024-05-14 02:19:51.027789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.476 [2024-05-14 02:19:51.027819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.476 [2024-05-14 02:19:51.031844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.476 [2024-05-14 02:19:51.031897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.476 [2024-05-14 02:19:51.031910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.476 [2024-05-14 02:19:51.036535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.476 [2024-05-14 02:19:51.036574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.476 [2024-05-14 02:19:51.036588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.476 [2024-05-14 02:19:51.040610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.476 [2024-05-14 02:19:51.040648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.476 [2024-05-14 02:19:51.040662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.476 [2024-05-14 02:19:51.044382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.476 [2024-05-14 02:19:51.044428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.476 [2024-05-14 02:19:51.044459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.476 [2024-05-14 02:19:51.048289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.476 [2024-05-14 02:19:51.048342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.476 [2024-05-14 02:19:51.048355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.476 [2024-05-14 02:19:51.052455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.476 [2024-05-14 02:19:51.052513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.476 [2024-05-14 02:19:51.052527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.476 [2024-05-14 02:19:51.057230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.476 [2024-05-14 02:19:51.057284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.476 [2024-05-14 02:19:51.057314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.735 [2024-05-14 02:19:51.061066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.735 [2024-05-14 02:19:51.061121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.735 [2024-05-14 02:19:51.061135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.735 [2024-05-14 02:19:51.065742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.735 [2024-05-14 02:19:51.065790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.735 [2024-05-14 02:19:51.065820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.735 [2024-05-14 02:19:51.069900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.735 [2024-05-14 02:19:51.069989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.735 [2024-05-14 02:19:51.070003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.735 [2024-05-14 02:19:51.074355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.735 [2024-05-14 02:19:51.074409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.735 [2024-05-14 02:19:51.074422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.735 [2024-05-14 02:19:51.078743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.735 [2024-05-14 02:19:51.078789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.735 [2024-05-14 02:19:51.078802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.735 [2024-05-14 02:19:51.083397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.735 [2024-05-14 02:19:51.083451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.735 [2024-05-14 02:19:51.083464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.736 [2024-05-14 02:19:51.087152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.736 [2024-05-14 02:19:51.087221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.736 [2024-05-14 02:19:51.087234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.736 [2024-05-14 02:19:51.091925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.736 [2024-05-14 02:19:51.092024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.736 [2024-05-14 02:19:51.092068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.736 [2024-05-14 02:19:51.096554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.736 [2024-05-14 02:19:51.096591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.736 [2024-05-14 02:19:51.096605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.736 [2024-05-14 02:19:51.100829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.736 [2024-05-14 02:19:51.100893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.736 [2024-05-14 02:19:51.100907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.736 [2024-05-14 02:19:51.104923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.736 [2024-05-14 02:19:51.104976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.736 [2024-05-14 02:19:51.104989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.736 [2024-05-14 02:19:51.109556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30) 00:22:36.736 
[2024-05-14 02:19:51.109626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.736 [2024-05-14 02:19:51.109639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:36.736 [2024-05-14 02:19:51.114199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30)
00:22:36.736 [2024-05-14 02:19:51.114240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.736 [2024-05-14 02:19:51.114254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:36.736 [2024-05-14 02:19:51.118588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30)
00:22:36.736 [2024-05-14 02:19:51.118626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.736 [2024-05-14 02:19:51.118639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:36.736 [2024-05-14 02:19:51.122912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30)
00:22:36.736 [2024-05-14 02:19:51.122979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.736 [2024-05-14 02:19:51.122993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:36.736 [2024-05-14 02:19:51.128266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95fa30)
00:22:36.736 [2024-05-14 02:19:51.128305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.736 [2024-05-14 02:19:51.128318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:36.736
00:22:36.736 Latency(us)
00:22:36.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:36.736 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:36.736 nvme0n1 : 2.00 7044.09 880.51 0.00 0.00 2267.34 610.68 7923.90
00:22:36.736 ===================================================================================================================
00:22:36.736 Total : 7044.09 880.51 0.00 0.00 2267.34 610.68 7923.90
00:22:36.736 0
00:22:36.736 02:19:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:36.736 02:19:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:36.736 02:19:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:36.736 02:19:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:36.736 | .driver_specific
00:22:36.736 | .nvme_error
00:22:36.736 | .status_code
00:22:36.736 | .command_transient_transport_error'
00:22:36.996 02:19:51 -- host/digest.sh@71 -- # (( 454 > 0 ))
00:22:36.996 02:19:51 -- host/digest.sh@73 -- # killprocess 84902
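
For reference, the get_transient_errcount step traced just above boils down to one RPC call piped through jq. A minimal standalone sketch of that check follows, reusing the rpc.py client, the /var/tmp/bperf.sock socket and the jq filter that appear in the trace; packaging it as its own little script is my framing, not the digest.sh source itself.

  #!/usr/bin/env bash
  # Ask bdevperf for per-bdev I/O statistics over its RPC socket and extract the
  # count of commands that completed with TRANSIENT TRANSPORT ERROR on nvme0n1.
  # The counter is only populated when NVMe error statistics are enabled
  # (bdev_nvme_set_options --nvme-error-stat, as seen later in this log).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # This run reported 454; the assertion only requires at least one injected digest error.
  (( errcount > 0 ))
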
00:22:36.996 02:19:51 -- common/autotest_common.sh@926 -- # '[' -z 84902 ']'
00:22:36.996 02:19:51 -- common/autotest_common.sh@930 -- # kill -0 84902
00:22:36.996 02:19:51 -- common/autotest_common.sh@931 -- # uname
00:22:36.996 02:19:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:36.996 02:19:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84902
00:22:36.996 02:19:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:36.996 02:19:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:36.996 killing process with pid 84902
02:19:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84902'
00:22:36.996 02:19:51 -- common/autotest_common.sh@945 -- # kill 84902
00:22:36.996 Received shutdown signal, test time was about 2.000000 seconds
00:22:36.996
00:22:36.996 Latency(us)
00:22:36.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:36.996 ===================================================================================================================
00:22:36.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:36.996 02:19:51 -- common/autotest_common.sh@950 -- # wait 84902
00:22:37.255 02:19:51 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:22:37.255 02:19:51 -- host/digest.sh@54 -- # local rw bs qd
00:22:37.255 02:19:51 -- host/digest.sh@56 -- # rw=randwrite
00:22:37.255 02:19:51 -- host/digest.sh@56 -- # bs=4096
00:22:37.255 02:19:51 -- host/digest.sh@56 -- # qd=128
00:22:37.255 02:19:51 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:37.255 02:19:51 -- host/digest.sh@58 -- # bperfpid=84997
00:22:37.255 02:19:51 -- host/digest.sh@60 -- # waitforlisten 84997 /var/tmp/bperf.sock
00:22:37.255 02:19:51 -- common/autotest_common.sh@819 -- # '[' -z 84997 ']'
00:22:37.255 02:19:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:37.255 02:19:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:37.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
02:19:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:37.255 02:19:51 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:37.255 02:19:51 -- common/autotest_common.sh@10 -- # set +x
00:22:37.255 [2024-05-14 02:19:51.720279] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:22:37.255 [2024-05-14 02:19:51.720369] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84997 ]
00:22:37.514 [2024-05-14 02:19:51.857584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:37.514 [2024-05-14 02:19:51.919477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:38.450 02:19:52 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:38.450 02:19:52 -- common/autotest_common.sh@852 -- # return 0
00:22:38.450 02:19:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:38.450 02:19:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:38.450 02:19:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:38.450 02:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:38.450 02:19:53 -- common/autotest_common.sh@10 -- # set +x
00:22:38.450 02:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:38.450 02:19:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:38.450 02:19:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:39.017 nvme0n1
00:22:39.017 02:19:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:39.017 02:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:39.017 02:19:53 -- common/autotest_common.sh@10 -- # set +x
00:22:39.017 02:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:39.017 02:19:53 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:39.017 02:19:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:39.017 Running I/O for 2 seconds...
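
The trace above is the setup for this second digest pass (randwrite, 4K blocks, queue depth 128), condensed below into a hedged sketch. The helper names bperf_rpc/target_rpc are mine, and it is an assumption here that rpc_cmd in the trace talks to the target app's default RPC socket while bperf_rpc talks to bdevperf's /var/tmp/bperf.sock; the RPC names and flags themselves are copied from the trace.

  #!/usr/bin/env bash
  # Sketch of the randwrite digest-error setup: enable NVMe error counters and
  # unlimited bdev retries on the bdevperf side, attach the TCP controller with
  # data digest (--ddgst) enabled, arm crc32c error injection, then drive the
  # 2-second workload via bdevperf's RPC.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock
  bperf_rpc()  { "$rpc" -s "$bperf_sock" "$@"; }   # bdevperf's own RPC socket
  target_rpc() { "$rpc" "$@"; }                    # assumed: target app on the default socket
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  target_rpc accel_error_inject_error -o crc32c -t disable         # flags as in the trace
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256  # flags as in the trace
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests
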
00:22:39.017 [2024-05-14 02:19:53.539094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190eea00 00:22:39.017 [2024-05-14 02:19:53.540644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.017 [2024-05-14 02:19:53.540689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.017 [2024-05-14 02:19:53.550864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e6fa8 00:22:39.017 [2024-05-14 02:19:53.552150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.017 [2024-05-14 02:19:53.552204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:39.017 [2024-05-14 02:19:53.563876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190eea00 00:22:39.017 [2024-05-14 02:19:53.565226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.017 [2024-05-14 02:19:53.565263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:39.017 [2024-05-14 02:19:53.576938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e8d30 00:22:39.017 [2024-05-14 02:19:53.578038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.017 [2024-05-14 02:19:53.578077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:39.017 [2024-05-14 02:19:53.589717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e0630 00:22:39.017 [2024-05-14 02:19:53.590922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.017 [2024-05-14 02:19:53.591007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.017 [2024-05-14 02:19:53.603279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e2c28 00:22:39.017 [2024-05-14 02:19:53.603506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.018 [2024-05-14 02:19:53.603528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.617070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e1710 00:22:39.275 [2024-05-14 02:19:53.617966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.618005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:39.275 [2024-05-14 02:19:53.630021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ec840 00:22:39.275 [2024-05-14 02:19:53.630278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.630311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.643315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f8a50 00:22:39.275 [2024-05-14 02:19:53.643808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.643859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.656525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ec408 00:22:39.275 [2024-05-14 02:19:53.656991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.657030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.669872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fb480 00:22:39.275 [2024-05-14 02:19:53.670252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.670290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.682344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fc998 00:22:39.275 [2024-05-14 02:19:53.682773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.682822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.696072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190eea00 00:22:39.275 [2024-05-14 02:19:53.696475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.696513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.709283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190eb328 00:22:39.275 [2024-05-14 02:19:53.709611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.709652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:004d p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.722733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190eea00 00:22:39.275 [2024-05-14 02:19:53.722992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.723028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.735514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fc998 00:22:39.275 [2024-05-14 02:19:53.735751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.735790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.752208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fd208 00:22:39.275 [2024-05-14 02:19:53.753591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.753658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.763006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e1f80 00:22:39.275 [2024-05-14 02:19:53.763303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.763367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.778187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f9b30 00:22:39.275 [2024-05-14 02:19:53.778616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.778655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.791683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fac10 00:22:39.275 [2024-05-14 02:19:53.792295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.792337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.804289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fa3a0 00:22:39.275 [2024-05-14 02:19:53.804868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.804906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.817247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f0bc0 00:22:39.275 [2024-05-14 02:19:53.817813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.817861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.830113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e6b70 00:22:39.275 [2024-05-14 02:19:53.830560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.830599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.843078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e9168 00:22:39.275 [2024-05-14 02:19:53.843568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.843608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:39.275 [2024-05-14 02:19:53.855889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ef6a8 00:22:39.275 [2024-05-14 02:19:53.856344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.275 [2024-05-14 02:19:53.856382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:53.869286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e9168 00:22:39.534 [2024-05-14 02:19:53.869768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:53.869819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:53.882310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e6b70 00:22:39.534 [2024-05-14 02:19:53.882669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:53.882713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:53.895352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f0bc0 00:22:39.534 [2024-05-14 02:19:53.895794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:53.895844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:53.909774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e5ec8 00:22:39.534 [2024-05-14 02:19:53.911393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:53.911430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:53.923234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fac10 00:22:39.534 [2024-05-14 02:19:53.923892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:53.923959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:53.937095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190df118 00:22:39.534 [2024-05-14 02:19:53.938108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:53.938146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:53.948064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e7818 00:22:39.534 [2024-05-14 02:19:53.949209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:53.949275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:53.960878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e4140 00:22:39.534 [2024-05-14 02:19:53.962142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:53.962191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:53.973404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f2d80 00:22:39.534 [2024-05-14 02:19:53.974911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:53.974962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:53.985865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e7818 00:22:39.534 [2024-05-14 02:19:53.987633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:53.987717] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:54.000222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f6458 00:22:39.534 [2024-05-14 02:19:54.002085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:54.002125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:54.013485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e12d8 00:22:39.534 [2024-05-14 02:19:54.014306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:54.014343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:54.025993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f0788 00:22:39.534 [2024-05-14 02:19:54.027362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:54.027397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:54.039751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f7538 00:22:39.534 [2024-05-14 02:19:54.040292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:54.040329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:54.055297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fb8b8 00:22:39.534 [2024-05-14 02:19:54.056574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:54.056628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:54.065136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f0bc0 00:22:39.534 [2024-05-14 02:19:54.065336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:54.065376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:54.080511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f7970 00:22:39.534 [2024-05-14 02:19:54.081298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:54.081334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:54.093876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f35f0 00:22:39.534 [2024-05-14 02:19:54.094837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:54.094875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:39.534 [2024-05-14 02:19:54.105480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e23b8 00:22:39.534 [2024-05-14 02:19:54.105961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.534 [2024-05-14 02:19:54.105997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:39.792 [2024-05-14 02:19:54.122353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fe720 00:22:39.792 [2024-05-14 02:19:54.124288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.792 [2024-05-14 02:19:54.124325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.792 [2024-05-14 02:19:54.133249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fb480 00:22:39.792 [2024-05-14 02:19:54.134314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.792 [2024-05-14 02:19:54.134366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:39.792 [2024-05-14 02:19:54.145254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fb480 00:22:39.792 [2024-05-14 02:19:54.146678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.792 [2024-05-14 02:19:54.146715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.792 [2024-05-14 02:19:54.157621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fb480 00:22:39.792 [2024-05-14 02:19:54.159125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.792 [2024-05-14 02:19:54.159162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.792 [2024-05-14 02:19:54.169341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f35f0 00:22:39.792 [2024-05-14 02:19:54.170790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.792 [2024-05-14 02:19:54.170838] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.184110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f4298 00:22:39.793 [2024-05-14 02:19:54.185237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.185288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.196148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190dece0 00:22:39.793 [2024-05-14 02:19:54.196461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.196495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.209437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190df118 00:22:39.793 [2024-05-14 02:19:54.210490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.210545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.220578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f9b30 00:22:39.793 [2024-05-14 02:19:54.220780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.220803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.236166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ed0b0 00:22:39.793 [2024-05-14 02:19:54.236858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.236896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.249026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e7818 00:22:39.793 [2024-05-14 02:19:54.249952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.249990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.261495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f0788 00:22:39.793 [2024-05-14 02:19:54.262495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 
02:19:54.262530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.274104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e0630 00:22:39.793 [2024-05-14 02:19:54.275735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.275793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.287884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ff3c8 00:22:39.793 [2024-05-14 02:19:54.289570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.289608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.300092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190edd58 00:22:39.793 [2024-05-14 02:19:54.300947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.300984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.313102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e38d0 00:22:39.793 [2024-05-14 02:19:54.313974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.314011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.324474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ef6a8 00:22:39.793 [2024-05-14 02:19:54.326075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.326112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.341259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e4578 00:22:39.793 [2024-05-14 02:19:54.342319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.342371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.353441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e7818 00:22:39.793 [2024-05-14 02:19:54.355074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 
[2024-05-14 02:19:54.355111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.366077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f3e60 00:22:39.793 [2024-05-14 02:19:54.366620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.366655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:39.793 [2024-05-14 02:19:54.379372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f8a50 00:22:39.793 [2024-05-14 02:19:54.380463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.793 [2024-05-14 02:19:54.380499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.392643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190de470 00:22:40.051 [2024-05-14 02:19:54.393462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.393548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.405891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190dece0 00:22:40.051 [2024-05-14 02:19:54.406615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.406655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.419261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190eaab8 00:22:40.051 [2024-05-14 02:19:54.420005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.420063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.432226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190dfdc0 00:22:40.051 [2024-05-14 02:19:54.432915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.432952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.445278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f46d0 00:22:40.051 [2024-05-14 02:19:54.446598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:40.051 [2024-05-14 02:19:54.446634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.457559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e3d08 00:22:40.051 [2024-05-14 02:19:54.459164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.459201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.471015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e4578 00:22:40.051 [2024-05-14 02:19:54.472524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.472560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.483824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190eea00 00:22:40.051 [2024-05-14 02:19:54.485260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.485298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.496735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f1868 00:22:40.051 [2024-05-14 02:19:54.498125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.498163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.509782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e3498 00:22:40.051 [2024-05-14 02:19:54.511068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.511107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.522613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190feb58 00:22:40.051 [2024-05-14 02:19:54.524053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.524104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.535710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fc560 00:22:40.051 [2024-05-14 02:19:54.536780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14582 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.536838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.550735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f7da8 00:22:40.051 [2024-05-14 02:19:54.551648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.551684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.563797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e84c0 00:22:40.051 [2024-05-14 02:19:54.564719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.564753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.575578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ff3c8 00:22:40.051 [2024-05-14 02:19:54.576888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.576922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.588237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e95a0 00:22:40.051 [2024-05-14 02:19:54.589713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.589790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.601767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ebb98 00:22:40.051 [2024-05-14 02:19:54.603307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.603343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.613887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e2c28 00:22:40.051 [2024-05-14 02:19:54.614962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.615014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.051 [2024-05-14 02:19:54.629033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e4578 00:22:40.051 [2024-05-14 02:19:54.630097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13134 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:40.051 [2024-05-14 02:19:54.630136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.642130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190df988 00:22:40.310 [2024-05-14 02:19:54.642966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.643004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.654472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e6b70 00:22:40.310 [2024-05-14 02:19:54.655414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.655452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.667396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f6cc8 00:22:40.310 [2024-05-14 02:19:54.668394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.668429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.680426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f8e88 00:22:40.310 [2024-05-14 02:19:54.681599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.681634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.694313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f7100 00:22:40.310 [2024-05-14 02:19:54.694986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.695023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.705478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fda78 00:22:40.310 [2024-05-14 02:19:54.705947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.705980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.721795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ff3c8 00:22:40.310 [2024-05-14 02:19:54.722791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14419 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.722835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.734640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e95a0 00:22:40.310 [2024-05-14 02:19:54.735602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.735638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.747986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190de038 00:22:40.310 [2024-05-14 02:19:54.748936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.748989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.759389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f8618 00:22:40.310 [2024-05-14 02:19:54.760487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.760538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.774935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190de8a8 00:22:40.310 [2024-05-14 02:19:54.775949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.775998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.787193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f6890 00:22:40.310 [2024-05-14 02:19:54.788082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.788131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.800256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e88f8 00:22:40.310 [2024-05-14 02:19:54.801544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.801580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:40.310 [2024-05-14 02:19:54.815307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f3a28 00:22:40.310 [2024-05-14 02:19:54.816642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:9274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.310 [2024-05-14 02:19:54.816677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:40.311 [2024-05-14 02:19:54.826874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ecc78 00:22:40.311 [2024-05-14 02:19:54.828331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.311 [2024-05-14 02:19:54.828384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:40.311 [2024-05-14 02:19:54.840385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f8e88 00:22:40.311 [2024-05-14 02:19:54.841230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.311 [2024-05-14 02:19:54.841286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.311 [2024-05-14 02:19:54.854511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fdeb0 00:22:40.311 [2024-05-14 02:19:54.855482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.311 [2024-05-14 02:19:54.855515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.311 [2024-05-14 02:19:54.867242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e0a68 00:22:40.311 [2024-05-14 02:19:54.869526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.311 [2024-05-14 02:19:54.869578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.311 [2024-05-14 02:19:54.879258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f20d8 00:22:40.311 [2024-05-14 02:19:54.880821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.311 [2024-05-14 02:19:54.880911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:40.311 [2024-05-14 02:19:54.892880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f7100 00:22:40.311 [2024-05-14 02:19:54.893314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.311 [2024-05-14 02:19:54.893351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:40.569 [2024-05-14 02:19:54.905557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ecc78 00:22:40.569 [2024-05-14 02:19:54.906053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:20498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.569 [2024-05-14 02:19:54.906099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:40.569 [2024-05-14 02:19:54.918811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e6738 00:22:40.569 [2024-05-14 02:19:54.920124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.569 [2024-05-14 02:19:54.920173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:40.569 [2024-05-14 02:19:54.930450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f20d8 00:22:40.569 [2024-05-14 02:19:54.930775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.569 [2024-05-14 02:19:54.930816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:40.569 [2024-05-14 02:19:54.946154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e3060 00:22:40.569 [2024-05-14 02:19:54.947241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.569 [2024-05-14 02:19:54.947276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:40.569 [2024-05-14 02:19:54.957608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e0a68 00:22:40.569 [2024-05-14 02:19:54.958840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.569 [2024-05-14 02:19:54.958876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:54.970507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f57b0 00:22:40.570 [2024-05-14 02:19:54.972620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:54.972672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:54.983679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fb480 00:22:40.570 [2024-05-14 02:19:54.984924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:54.984964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:54.996086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f31b8 00:22:40.570 [2024-05-14 02:19:54.997447] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:54.997484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.011634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f46d0 00:22:40.570 [2024-05-14 02:19:55.012484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.012534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.023090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fef90 00:22:40.570 [2024-05-14 02:19:55.024147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.024197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.036379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f1868 00:22:40.570 [2024-05-14 02:19:55.037822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.037869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.049342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e99d8 00:22:40.570 [2024-05-14 02:19:55.050228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.050281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.062117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fef90 00:22:40.570 [2024-05-14 02:19:55.062365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.062396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.075316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e5220 00:22:40.570 [2024-05-14 02:19:55.076515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.076550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.088074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e4140 00:22:40.570 [2024-05-14 02:19:55.088345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.088392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.101211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e4140 00:22:40.570 [2024-05-14 02:19:55.102038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.102075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.114312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e6fa8 00:22:40.570 [2024-05-14 02:19:55.114829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.114879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.127124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f4b08 00:22:40.570 [2024-05-14 02:19:55.127590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.127627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.140196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190eb760 00:22:40.570 [2024-05-14 02:19:55.140690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.140731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:40.570 [2024-05-14 02:19:55.154415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e99d8 00:22:40.570 [2024-05-14 02:19:55.156068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.570 [2024-05-14 02:19:55.156105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.829 [2024-05-14 02:19:55.167978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f57b0 00:22:40.829 [2024-05-14 02:19:55.168760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.829 [2024-05-14 02:19:55.168865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:40.829 [2024-05-14 02:19:55.181210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e7818 00:22:40.829 [2024-05-14 02:19:55.182172] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.829 [2024-05-14 02:19:55.182209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:40.829 [2024-05-14 02:19:55.192724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fb048 00:22:40.829 [2024-05-14 02:19:55.193074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.829 [2024-05-14 02:19:55.193110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:40.829 [2024-05-14 02:19:55.207138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f0350 00:22:40.829 [2024-05-14 02:19:55.207693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.829 [2024-05-14 02:19:55.207730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:40.829 [2024-05-14 02:19:55.220624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ee190 00:22:40.829 [2024-05-14 02:19:55.221995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.829 [2024-05-14 02:19:55.222032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:40.829 [2024-05-14 02:19:55.233900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f0788 00:22:40.829 [2024-05-14 02:19:55.235623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.829 [2024-05-14 02:19:55.235660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:40.829 [2024-05-14 02:19:55.247584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f9f68 00:22:40.829 [2024-05-14 02:19:55.249764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.829 [2024-05-14 02:19:55.249870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:40.829 [2024-05-14 02:19:55.259802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190eb760 00:22:40.829 [2024-05-14 02:19:55.261239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.829 [2024-05-14 02:19:55.261274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:40.829 [2024-05-14 02:19:55.272774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f0788 00:22:40.829 [2024-05-14 
02:19:55.273144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.829 [2024-05-14 02:19:55.273182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.285519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190de470 00:22:40.830 [2024-05-14 02:19:55.285882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.285919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.299152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fd640 00:22:40.830 [2024-05-14 02:19:55.300147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.300183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.312265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ee5c8 00:22:40.830 [2024-05-14 02:19:55.313632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.313701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.325446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e12d8 00:22:40.830 [2024-05-14 02:19:55.326016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.326051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.338422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190de038 00:22:40.830 [2024-05-14 02:19:55.339009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.339047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.350457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ed0b0 00:22:40.830 [2024-05-14 02:19:55.350690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.350713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.366399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190de470 
00:22:40.830 [2024-05-14 02:19:55.367173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.367209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.378995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fb8b8 00:22:40.830 [2024-05-14 02:19:55.380879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.380926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.391473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f4b08 00:22:40.830 [2024-05-14 02:19:55.392261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.392315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.403757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e12d8 00:22:40.830 [2024-05-14 02:19:55.404929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.404997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:40.830 [2024-05-14 02:19:55.415986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fda78 00:22:40.830 [2024-05-14 02:19:55.416874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.830 [2024-05-14 02:19:55.416911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:41.091 [2024-05-14 02:19:55.430832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190efae0 00:22:41.091 [2024-05-14 02:19:55.431760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.091 [2024-05-14 02:19:55.431819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:41.091 [2024-05-14 02:19:55.443543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f5be8 00:22:41.091 [2024-05-14 02:19:55.444921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.091 [2024-05-14 02:19:55.444967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:41.091 [2024-05-14 02:19:55.456401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with 
pdu=0x2000190ed4e8 00:22:41.091 [2024-05-14 02:19:55.456885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.091 [2024-05-14 02:19:55.456923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:41.091 [2024-05-14 02:19:55.469389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ed0b0 00:22:41.091 [2024-05-14 02:19:55.470044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.091 [2024-05-14 02:19:55.470084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.091 [2024-05-14 02:19:55.482281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190ee5c8 00:22:41.091 [2024-05-14 02:19:55.483468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.091 [2024-05-14 02:19:55.483504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:41.091 [2024-05-14 02:19:55.494926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190fbcf0 00:22:41.091 [2024-05-14 02:19:55.496061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.091 [2024-05-14 02:19:55.496098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:41.091 [2024-05-14 02:19:55.509199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190f2d80 00:22:41.091 [2024-05-14 02:19:55.510349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.091 [2024-05-14 02:19:55.510385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.091 [2024-05-14 02:19:55.518766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fa10) with pdu=0x2000190e3060 00:22:41.091 [2024-05-14 02:19:55.518890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.091 [2024-05-14 02:19:55.518913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:41.091 00:22:41.091 Latency(us) 00:22:41.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.091 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:41.091 nvme0n1 : 2.00 19355.11 75.61 0.00 0.00 6606.49 2591.65 17277.67 00:22:41.091 =================================================================================================================== 00:22:41.091 Total : 19355.11 75.61 0.00 0.00 6606.49 2591.65 17277.67 00:22:41.091 0 00:22:41.091 02:19:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:41.092 02:19:55 -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:41.092 | .driver_specific 00:22:41.092 | .nvme_error 00:22:41.092 | .status_code 00:22:41.092 | .command_transient_transport_error' 00:22:41.092 02:19:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:41.092 02:19:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:41.353 02:19:55 -- host/digest.sh@71 -- # (( 152 > 0 )) 00:22:41.353 02:19:55 -- host/digest.sh@73 -- # killprocess 84997 00:22:41.353 02:19:55 -- common/autotest_common.sh@926 -- # '[' -z 84997 ']' 00:22:41.353 02:19:55 -- common/autotest_common.sh@930 -- # kill -0 84997 00:22:41.353 02:19:55 -- common/autotest_common.sh@931 -- # uname 00:22:41.353 02:19:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:41.353 02:19:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84997 00:22:41.353 02:19:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:41.353 02:19:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:41.353 killing process with pid 84997 00:22:41.353 02:19:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84997' 00:22:41.353 02:19:55 -- common/autotest_common.sh@945 -- # kill 84997 00:22:41.353 Received shutdown signal, test time was about 2.000000 seconds 00:22:41.353 00:22:41.353 Latency(us) 00:22:41.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.353 =================================================================================================================== 00:22:41.353 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.353 02:19:55 -- common/autotest_common.sh@950 -- # wait 84997 00:22:41.612 02:19:56 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:22:41.612 02:19:56 -- host/digest.sh@54 -- # local rw bs qd 00:22:41.612 02:19:56 -- host/digest.sh@56 -- # rw=randwrite 00:22:41.612 02:19:56 -- host/digest.sh@56 -- # bs=131072 00:22:41.612 02:19:56 -- host/digest.sh@56 -- # qd=16 00:22:41.612 02:19:56 -- host/digest.sh@58 -- # bperfpid=85083 00:22:41.612 02:19:56 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:41.612 02:19:56 -- host/digest.sh@60 -- # waitforlisten 85083 /var/tmp/bperf.sock 00:22:41.612 02:19:56 -- common/autotest_common.sh@819 -- # '[' -z 85083 ']' 00:22:41.612 02:19:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:41.612 02:19:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:41.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:41.612 02:19:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:41.612 02:19:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:41.612 02:19:56 -- common/autotest_common.sh@10 -- # set +x 00:22:41.612 [2024-05-14 02:19:56.161495] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:41.612 [2024-05-14 02:19:56.161624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85083 ] 00:22:41.612 I/O size of 131072 is greater than zero copy threshold (65536). 
00:22:41.612 Zero copy mechanism will not be used. 00:22:41.871 [2024-05-14 02:19:56.308616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.871 [2024-05-14 02:19:56.368330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.806 02:19:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:42.806 02:19:57 -- common/autotest_common.sh@852 -- # return 0 00:22:42.806 02:19:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:42.806 02:19:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:43.065 02:19:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:43.065 02:19:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.065 02:19:57 -- common/autotest_common.sh@10 -- # set +x 00:22:43.065 02:19:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.065 02:19:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.065 02:19:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.323 nvme0n1 00:22:43.323 02:19:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:43.323 02:19:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.323 02:19:57 -- common/autotest_common.sh@10 -- # set +x 00:22:43.323 02:19:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.323 02:19:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:43.323 02:19:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:43.583 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:43.583 Zero copy mechanism will not be used. 00:22:43.583 Running I/O for 2 seconds... 
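The xtrace above amounts to the following sequence. This is a condensed, hand-written sketch of what the trace shows, not the digest.sh source; the socket path, target address, NQN and RPC names are taken verbatim from the trace, everything else (variable names, comments) is illustrative only.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

# bdevperf was already started above as:
#   build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z

# Collect NVMe error statistics and never retry failed I/O, so every injected
# digest failure is visible in the bdev_get_iostat error counters.
$RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous crc32c error injection. (rpc_cmd in the trace passes no -s,
# so it presumably goes to the target application's default RPC socket, not bperf's.)
$RPC accel_error_inject_error -o crc32c -t disable

# Attach the subsystem over TCP with data digest enabled; this creates nvme0n1.
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 crc32c computations so the data digests of in-flight WRITEs mismatch.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the 2-second randwrite run whose digest errors are logged below.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests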
00:22:43.583 [2024-05-14 02:19:57.975960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:57.976297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:57.976329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:57.981033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:57.981188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:57.981211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:57.985599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:57.985727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:57.985750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:57.990502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:57.990679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:57.990700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:57.995329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:57.995439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:57.995461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:58.000326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:58.000432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:58.000453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:58.005256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:58.005410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:58.005431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:58.010206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:58.010452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:58.010485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:58.014798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:58.015032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:58.015065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:58.019788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:58.020003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:58.020024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:58.024634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:58.024762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:58.024782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:58.029668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:58.029804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:58.029824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:58.034303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:58.034466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:58.034486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:58.039186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:58.039353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:58.039373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.583 [2024-05-14 02:19:58.043651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.583 [2024-05-14 02:19:58.043799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.583 [2024-05-14 02:19:58.043831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.048842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.049130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.049163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.053537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.053821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.053916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.058450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.058587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.058607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.063255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.063404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.063427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.067976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.068118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.068140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.072733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.072890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.072911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.077798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.077986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.078009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.082439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.082610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.082632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.087297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.087564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.087601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.092290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.092547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.092613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.096850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.097005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.097027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.101564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.101684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.101705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.106184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.106317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 
02:19:58.106339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.110915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.111020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.111039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.115693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.115920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.115954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.120765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.120944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.120966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.125662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.125975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.126008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.130537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.130764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.130786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.135438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.135637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.135659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.140365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.140497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.140517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.144920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.145081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.145104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.150385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.150523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.150561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.155132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.155322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.155345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.159954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.160143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.160164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.164858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.584 [2024-05-14 02:19:58.165120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.584 [2024-05-14 02:19:58.165164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.584 [2024-05-14 02:19:58.169956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.585 [2024-05-14 02:19:58.170167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.585 [2024-05-14 02:19:58.170200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.174790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.175016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.175052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.179638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.179797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.179834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.184188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.184302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.184322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.189049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.189161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.189183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.193798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.194039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.194062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.198915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.199059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.199081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.203859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.204117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.204140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.208442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.208746] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.208768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.213537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.213724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.213744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.218377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.218520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.218542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.223225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.223344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.223381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.845 [2024-05-14 02:19:58.227773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.845 [2024-05-14 02:19:58.227919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.845 [2024-05-14 02:19:58.227953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.232864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.233028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.233065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.237356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.237495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.237515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.242514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.242774] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.242795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.247411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.247690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.247714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.252417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.252705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.252724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.257392] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.257534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.257555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.262346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.262443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.262465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.267118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.267232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.267252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.271907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.272101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.272121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.276315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 
00:22:43.846 [2024-05-14 02:19:58.276509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.276528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.281572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.281833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.281853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.286288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.286554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.286575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.291201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.291400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.291421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.296130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.296263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.296282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.300876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.301003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.301025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.305464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.305577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.305597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.310560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.310735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.310758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.315356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.315494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.315514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.320355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.320619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.320641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.325223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.325515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.325537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.329680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.329934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.329957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.334709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.334841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.334861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.339059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.339186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.339209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.343985] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.344117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.344138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.846 [2024-05-14 02:19:58.348614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.846 [2024-05-14 02:19:58.348783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.846 [2024-05-14 02:19:58.348803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.353560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.353726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.353747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.358476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.358693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.358714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.363532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.363759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.363780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.367959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.368189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.368211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.372907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.373038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.373060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:43.847 [2024-05-14 02:19:58.377513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.377626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.377693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.382636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.382776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.382798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.387426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.387626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.387648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.392500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.392647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.392683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.397547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.397805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.397843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.402519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.402746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.402766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.407354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.407574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.407595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.411896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.412018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.412039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.416579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.416699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.416718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.421485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.421640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.421661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.426564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.426752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.426772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.847 [2024-05-14 02:19:58.431609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:43.847 [2024-05-14 02:19:58.431825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-05-14 02:19:58.431861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.108 [2024-05-14 02:19:58.436935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.108 [2024-05-14 02:19:58.437170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.108 [2024-05-14 02:19:58.437200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.108 [2024-05-14 02:19:58.441890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.108 [2024-05-14 02:19:58.442155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.108 [2024-05-14 02:19:58.442191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.108 [2024-05-14 02:19:58.446825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.108 [2024-05-14 02:19:58.447079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.108 [2024-05-14 02:19:58.447099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.108 [2024-05-14 02:19:58.451480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.108 [2024-05-14 02:19:58.451640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.108 [2024-05-14 02:19:58.451675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.108 [2024-05-14 02:19:58.456279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.108 [2024-05-14 02:19:58.456416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.108 [2024-05-14 02:19:58.456463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.108 [2024-05-14 02:19:58.460920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.108 [2024-05-14 02:19:58.461061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.108 [2024-05-14 02:19:58.461085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.108 [2024-05-14 02:19:58.465952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.108 [2024-05-14 02:19:58.466126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.108 [2024-05-14 02:19:58.466155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.108 [2024-05-14 02:19:58.470437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.108 [2024-05-14 02:19:58.470576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.108 [2024-05-14 02:19:58.470596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.108 [2024-05-14 02:19:58.475492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.108 [2024-05-14 02:19:58.475718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.108 [2024-05-14 02:19:58.475739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.108 [2024-05-14 02:19:58.479951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.480207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.480244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.484997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.485203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.485223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.489772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.489911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.489957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.494817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.494980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.495016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.499844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.500035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.500056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.504977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.505183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.505205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.509594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.509743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 
[2024-05-14 02:19:58.509763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.514941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.515190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.515251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.519500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.519722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.519744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.524521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.524716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.524736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.529239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.529376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.529397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.533900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.534056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.534079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.538696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.538824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.538845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.543524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.543698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.543749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.548318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.548464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.548483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.553159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.553459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.553513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.558151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.558388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.558436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.562771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.563010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.563053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.567533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.567661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.567683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.572285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.572443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.572465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.577220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.577334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.577354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.582066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.582230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.582267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.586947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.587085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.587121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.591706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.592015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.592094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.596697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.596982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.597028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.601227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.601436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.601457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.109 [2024-05-14 02:19:58.606325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.109 [2024-05-14 02:19:58.606463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.109 [2024-05-14 02:19:58.606483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.611106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.611208] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.611227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.616071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.616203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.616222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.620822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.621029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.621049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.625613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.625752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.625773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.630169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.630428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.630491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.635278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.635506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.635560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.639876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.640091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.640112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.644711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.644906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.644939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.649616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.649768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.649790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.654582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.654721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.654742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.659302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.659489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.659511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.664253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.664440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.664476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.669352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.669583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.669604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.674327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.674646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.674685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.679306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 
[2024-05-14 02:19:58.679485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.679509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.683909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.684038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.684060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.688607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.688722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.688743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-05-14 02:19:58.693351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.110 [2024-05-14 02:19:58.693447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-05-14 02:19:58.693469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.371 [2024-05-14 02:19:58.698414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.371 [2024-05-14 02:19:58.698614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.371 [2024-05-14 02:19:58.698635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.371 [2024-05-14 02:19:58.703406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.371 [2024-05-14 02:19:58.703549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.371 [2024-05-14 02:19:58.703570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.371 [2024-05-14 02:19:58.708608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.371 [2024-05-14 02:19:58.708849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.371 [2024-05-14 02:19:58.708871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.371 [2024-05-14 02:19:58.713235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with 
pdu=0x2000190fef90 00:22:44.371 [2024-05-14 02:19:58.713447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.371 [2024-05-14 02:19:58.713467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.371 [2024-05-14 02:19:58.718051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.371 [2024-05-14 02:19:58.718233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.371 [2024-05-14 02:19:58.718255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.371 [2024-05-14 02:19:58.722380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.371 [2024-05-14 02:19:58.722550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.371 [2024-05-14 02:19:58.722571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.371 [2024-05-14 02:19:58.727418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.371 [2024-05-14 02:19:58.727557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.371 [2024-05-14 02:19:58.727577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.371 [2024-05-14 02:19:58.732263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.371 [2024-05-14 02:19:58.732427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.371 [2024-05-14 02:19:58.732446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.371 [2024-05-14 02:19:58.737322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.371 [2024-05-14 02:19:58.737522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.737541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.741988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.742119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.742140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.747061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.747321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.747385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.751557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.751779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.751799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.756638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.756850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.756870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.761458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.761631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.761652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.766498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.766660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.766681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.771528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.771661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.771681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.776302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.776474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.776496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.781260] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.781397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.781416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.786442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.786725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.786812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.791597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.791868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.791905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.796500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.796724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.796747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.801527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.801664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.801685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.806199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.806294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.806317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.811155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.811289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.811309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:44.372 [2024-05-14 02:19:58.816082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.816259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.816281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.820912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.821038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.821074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.825862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.826139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.826162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.831054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.831316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.831353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.836078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.836264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.836284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.840978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.841079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.841102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.845519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.845649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.845687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.850623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.850750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.850772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.855628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.855790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.855813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.372 [2024-05-14 02:19:58.860796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.372 [2024-05-14 02:19:58.860932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-05-14 02:19:58.860953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.865558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.865831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.865908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.870639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.870946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.870994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.875317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.875549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.875571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.880419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.880575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.880596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.885452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.885594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.885614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.890542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.890670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.890692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.895172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.895340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.895360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.900123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.900277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.900298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.904897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.905146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.905168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.909669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.909960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.909982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.914554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.914820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.914842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.919249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.919347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.919370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.923912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.924046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.924066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.928548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.928665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.928701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.933557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.933731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.933767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.938207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.938334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.938356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.943442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.943699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.943720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.948061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.948279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 
[2024-05-14 02:19:58.948298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.952848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.953041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.953061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-05-14 02:19:58.957641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.373 [2024-05-14 02:19:58.957787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-05-14 02:19:58.957809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:58.962431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:58.962629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:58.962651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:58.967657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:58.967777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:58.967798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:58.972576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:58.972746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:58.972766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:58.977437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:58.977569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:58.977590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:58.982561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:58.982843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:58.982864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:58.987355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:58.987619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:58.987670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:58.992156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:58.992334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:58.992357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:58.996837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:58.996944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:58.996966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.001461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.001556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.001577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.006180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.006291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.006327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.011283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.011444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.011465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.016240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.016376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.016396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.021467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.021741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.021787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.026305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.026591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.026624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.031135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.031298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.031335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.036039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.036141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.036160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.040784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.040928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.040950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.045316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.045426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.045446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.050244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.050431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.050490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.055051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.055200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.055251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.059816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.060080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.060125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.064112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.064398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.064436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.069572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.069773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.069795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.075031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.075131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.075152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.080378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.080545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.080566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.085917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 
[2024-05-14 02:19:59.086055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.086078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.091424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.091625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.091648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.096831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.097020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.097043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.102095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.102320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.102359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.107435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.107697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.107736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.113001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.113239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.113274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.118379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.118542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.118564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.123368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) 
with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.123464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.123486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.128587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.635 [2024-05-14 02:19:59.128727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-05-14 02:19:59.128749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-05-14 02:19:59.134038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.134201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.134223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.139250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.139367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.139389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.144978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.145223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.145271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.150459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.150743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.150792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.155773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.156000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.156031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.160959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.161124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.161160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.166419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.166544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.166581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.171184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.171306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.171326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.175635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.175838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.175877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.180260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.180437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.180474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.185046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.185320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.185352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.189856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.190163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.190202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.194097] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.194201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.194222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.198481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.198628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.198681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.203225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.203334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.203353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.207498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.207603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.207622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.212121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.212280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.212333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.216458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.216614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.216634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-05-14 02:19:59.221355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.636 [2024-05-14 02:19:59.221624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-05-14 02:19:59.221663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
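The repeated failures above are the NVMe/TCP data-digest path at work: each data PDU carries a CRC32C data digest, the receiver recomputes it in data_crc32_calc_done() (tcp.c), and on a mismatch the command is completed with TRANSIENT TRANSPORT ERROR (00/22), which nvme_qpair.c then prints alongside the original WRITE. The standalone C sketch below reproduces only the digest arithmetic; the bitwise CRC32C routine, the 32-byte buffer, and the injected single-bit flip are illustrative assumptions and not SPDK's actual helpers, so read it as a model of why a payload change produces the mismatches logged here, not as the implementation under test.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the digest
 * algorithm NVMe/TCP uses for header and data digests. Exact byte ordering
 * and any final XOR convention on the wire follow the NVMe/TCP spec; this is
 * just the plain CRC32C of a buffer. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical 32-byte payload standing in for the DATA of one of the
     * WRITE PDUs in the log (unrelated to the "len:32" logical-block count
     * printed there; this only exercises the digest math). */
    uint8_t data[32];
    memset(data, 0xA5, sizeof(data));

    uint32_t sent_ddgst = crc32c(data, sizeof(data));

    /* Simulate the kind of corruption the test injects: flip one bit "in flight". */
    data[7] ^= 0x01;
    uint32_t recv_ddgst = crc32c(data, sizeof(data));

    if (recv_ddgst != sent_ddgst) {
        /* The receiver would report a data digest error here and fail the
         * command with a transient transport error, as in the log above. */
        printf("data digest mismatch: sent 0x%08x, computed 0x%08x\n",
               (unsigned)sent_ddgst, (unsigned)recv_ddgst);
    }
    return 0;
}

Compiled and run as-is, this prints one "data digest mismatch" line for the corrupted buffer; with the bit flip removed the two digests agree, which is the normal, non-injected case.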
00:22:44.898 [2024-05-14 02:19:59.225855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.226109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.226144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.231073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.231290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.231333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.235513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.235622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.235642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.239664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.239764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.239796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.243861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.243979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.243998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.248161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.248329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.248381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.252711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.252849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.252900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.257277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.257546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.257584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.262059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.262258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.262293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.266208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.266495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.266542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.270556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.270686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.270705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.274844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.275002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.275021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.279082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.279182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.279201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.283375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.283559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.283579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.288357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.288502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.288523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.293152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.293450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.293489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.297390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.297593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.297628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.301747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.302066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.302090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.306248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.306396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.306415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.310584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.310696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.310715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.314830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.314970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.314989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.319079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.319236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.319255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.323307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.323444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.323463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.327628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.327861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.327881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.331921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.332121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.332141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.336179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.898 [2024-05-14 02:19:59.336367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-05-14 02:19:59.336387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-05-14 02:19:59.340429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.340549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.340568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.345158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.345275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 
[2024-05-14 02:19:59.345293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.349230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.349349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.349369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.353358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.353526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.353545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.357656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.357793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.357813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.362060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.362341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.362392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.366645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.366874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.366894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.371025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.371182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.371202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.375367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.375507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.375527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.379797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.379921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.379941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.384049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.384161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.384181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.388336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.388501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.388520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.392625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.392803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.392823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.396942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.397172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.397201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.401211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.401434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.401453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.405358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.405544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.405563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.409609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.409752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.409771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.413877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.414002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.414024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.417893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.418038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.418060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.422159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.422352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.422371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.426413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.426538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.426557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.430829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.431068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.431103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.435391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.435596] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.435615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.439918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.440110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.440129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.444531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.444661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.444680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.449288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.449417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.449438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-05-14 02:19:59.453523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.899 [2024-05-14 02:19:59.453636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-05-14 02:19:59.453656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.900 [2024-05-14 02:19:59.457829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.900 [2024-05-14 02:19:59.458050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.900 [2024-05-14 02:19:59.458073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.900 [2024-05-14 02:19:59.461968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.900 [2024-05-14 02:19:59.462109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.900 [2024-05-14 02:19:59.462131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.900 [2024-05-14 02:19:59.466310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.900 [2024-05-14 02:19:59.466569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.900 [2024-05-14 02:19:59.466621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.900 [2024-05-14 02:19:59.471092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.900 [2024-05-14 02:19:59.471392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.900 [2024-05-14 02:19:59.471431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.900 [2024-05-14 02:19:59.475626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.900 [2024-05-14 02:19:59.475805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.900 [2024-05-14 02:19:59.475827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.900 [2024-05-14 02:19:59.480027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:44.900 [2024-05-14 02:19:59.480134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.900 [2024-05-14 02:19:59.480155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.160 [2024-05-14 02:19:59.484639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.160 [2024-05-14 02:19:59.484794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.160 [2024-05-14 02:19:59.484815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.160 [2024-05-14 02:19:59.488941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.160 [2024-05-14 02:19:59.489040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.160 [2024-05-14 02:19:59.489059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.160 [2024-05-14 02:19:59.493461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.160 [2024-05-14 02:19:59.493630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.493662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.497548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 
[2024-05-14 02:19:59.497667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.497718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.502477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.502708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.502748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.507094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.507297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.507330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.511796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.511973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.511995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.515970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.516079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.516098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.520066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.520181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.520200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.524145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.524241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.524260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.528307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) 
with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.528457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.528475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.532387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.532513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.532532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.536665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.536929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.536962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.540759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.541024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.541054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.544894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.545043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.545093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.548970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.549074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.549092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.553114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.553209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.553228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.557175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.557286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.557304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.561289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.561438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.561488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.565351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.565497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.565515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.569539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.569790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.569845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.573668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.573880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.573916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.577730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.577967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.577996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.581715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.581830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.581849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.585916] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.586039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.586059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.589989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.590078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.590099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.594033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.594192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.594214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.598074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.598182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.598202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.602318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.602588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.602619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.161 [2024-05-14 02:19:59.606470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.161 [2024-05-14 02:19:59.606730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-05-14 02:19:59.606776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.610657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.610855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.610875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:45.162 [2024-05-14 02:19:59.615024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.615143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.615162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.619076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.619174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.619193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.623186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.623279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.623298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.627289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.627491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.627521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.631433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.631554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.631572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.635737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.636012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.636042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.639836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.640066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.640096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.643947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.644119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.644172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.647974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.648071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.648090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.652029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.652144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.652163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.656059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.656161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.656179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.660219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.660401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.660447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.664350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.664474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.664493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.668584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.668853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.668880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.672779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.673040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.673074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.676976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.677200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.677230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.681043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.681148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.681167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.685222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.685338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.685357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.689241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.689358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.689377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.693360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.693511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.693530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.697381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.697524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.697543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.701717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.702008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.702045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.706407] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.706653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.706684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.711054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.711223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.711271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.715212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.715309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.715328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-05-14 02:19:59.719276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.162 [2024-05-14 02:19:59.719390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-05-14 02:19:59.719409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.163 [2024-05-14 02:19:59.723423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.163 [2024-05-14 02:19:59.723533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-05-14 02:19:59.723551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.163 [2024-05-14 02:19:59.727614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.163 [2024-05-14 02:19:59.727762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 
[2024-05-14 02:19:59.727797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.163 [2024-05-14 02:19:59.731680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.163 [2024-05-14 02:19:59.731806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-05-14 02:19:59.731841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.163 [2024-05-14 02:19:59.735978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.163 [2024-05-14 02:19:59.736209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-05-14 02:19:59.736266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.163 [2024-05-14 02:19:59.740102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.163 [2024-05-14 02:19:59.740301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-05-14 02:19:59.740352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.163 [2024-05-14 02:19:59.744481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.163 [2024-05-14 02:19:59.744684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-05-14 02:19:59.744722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.749046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.749157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.749176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.753196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.753315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.753336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.757482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.757594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.757645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.761622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.761784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.761848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.765725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.765987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.766018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.769898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.770164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.770203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.774114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.774375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.774409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.778141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.778372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.778407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.782213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.782340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.782375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.786687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.786821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.786873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.791199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.791310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.791330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.795997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.796210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.796243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.800986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.801133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.801180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.806170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.806480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.806515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.810840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.811092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.811150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.815498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.815686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.815705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.820226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.820349] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.820368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.824727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.824880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.824901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.829160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.829330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.829351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.833559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.833727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.833746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.838400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.838566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.838586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.843265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.843506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.843547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.848041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.848268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.848287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.852974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.853162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.853182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.857735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.857863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.857899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.862349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.862480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.862499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-05-14 02:19:59.866975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.424 [2024-05-14 02:19:59.867121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-05-14 02:19:59.867140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.871628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.871840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.871861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.876359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.876519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.876538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.881215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.881437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.881457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.885374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 
02:19:59.885600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.885619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.889511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.889684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.889704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.893735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.893896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.893918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.897871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.898032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.898054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.902115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.902204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.902226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.906425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.906593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.906612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.910600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.910749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.910768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.914885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with 
pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.915108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.915127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.919109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.919319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.919339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.923257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.923433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.923451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.927387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.927516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.927535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.931569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.931702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.931721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.935947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.936061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.936081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.940017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.940177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.940196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.944076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.944222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.944241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.948254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.948481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.948500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.952386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.952603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.952623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.956582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.956775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.956795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.960854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.960999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.961019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-05-14 02:19:59.965228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa9fbb0) with pdu=0x2000190fef90 00:22:45.425 [2024-05-14 02:19:59.965343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-05-14 02:19:59.965363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 00:22:45.425 Latency(us) 00:22:45.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.425 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:45.425 nvme0n1 : 2.00 6627.25 828.41 0.00 0.00 2408.67 1750.11 10068.71 00:22:45.425 =================================================================================================================== 00:22:45.425 Total : 6627.25 828.41 0.00 0.00 2408.67 1750.11 10068.71 00:22:45.425 0 00:22:45.425 02:19:59 -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:22:45.425 02:19:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:45.425 02:19:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:45.425 02:19:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:45.425 | .driver_specific 00:22:45.425 | .nvme_error 00:22:45.425 | .status_code 00:22:45.425 | .command_transient_transport_error' 00:22:45.683 02:20:00 -- host/digest.sh@71 -- # (( 428 > 0 )) 00:22:45.683 02:20:00 -- host/digest.sh@73 -- # killprocess 85083 00:22:45.683 02:20:00 -- common/autotest_common.sh@926 -- # '[' -z 85083 ']' 00:22:45.683 02:20:00 -- common/autotest_common.sh@930 -- # kill -0 85083 00:22:45.683 02:20:00 -- common/autotest_common.sh@931 -- # uname 00:22:45.683 02:20:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:45.683 02:20:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85083 00:22:45.940 02:20:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:45.940 02:20:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:45.940 killing process with pid 85083 00:22:45.940 02:20:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85083' 00:22:45.940 02:20:00 -- common/autotest_common.sh@945 -- # kill 85083 00:22:45.940 Received shutdown signal, test time was about 2.000000 seconds 00:22:45.940 00:22:45.940 Latency(us) 00:22:45.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.940 =================================================================================================================== 00:22:45.940 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.940 02:20:00 -- common/autotest_common.sh@950 -- # wait 85083 00:22:45.940 02:20:00 -- host/digest.sh@115 -- # killprocess 84786 00:22:45.940 02:20:00 -- common/autotest_common.sh@926 -- # '[' -z 84786 ']' 00:22:45.940 02:20:00 -- common/autotest_common.sh@930 -- # kill -0 84786 00:22:45.940 02:20:00 -- common/autotest_common.sh@931 -- # uname 00:22:45.940 02:20:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:45.940 02:20:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84786 00:22:45.940 02:20:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:45.940 02:20:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:45.941 killing process with pid 84786 00:22:45.941 02:20:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84786' 00:22:45.941 02:20:00 -- common/autotest_common.sh@945 -- # kill 84786 00:22:45.941 02:20:00 -- common/autotest_common.sh@950 -- # wait 84786 00:22:46.199 00:22:46.199 real 0m18.176s 00:22:46.199 user 0m35.780s 00:22:46.199 sys 0m4.631s 00:22:46.199 02:20:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.199 02:20:00 -- common/autotest_common.sh@10 -- # set +x 00:22:46.199 ************************************ 00:22:46.199 END TEST nvmf_digest_error 00:22:46.199 ************************************ 00:22:46.199 02:20:00 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:46.199 02:20:00 -- host/digest.sh@139 -- # nvmftestfini 00:22:46.199 02:20:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:46.199 02:20:00 -- nvmf/common.sh@116 -- # sync 00:22:46.199 02:20:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:46.199 02:20:00 -- nvmf/common.sh@119 -- # set +e 00:22:46.199 02:20:00 -- nvmf/common.sh@120 -- # for i in {1..20} 
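For reference, a minimal standalone sketch of the readback that get_transient_errcount performs above, assuming the same bperf RPC socket (/var/tmp/bperf.sock) and bdev name (nvme0n1); the jq path mirrors the filter shown in host/digest.sh and yields the value compared against 0 by the (( 428 > 0 )) check.

    # Query per-bdev error statistics over the bperf RPC socket and extract the
    # COMMAND TRANSIENT TRANSPORT ERROR counter accumulated by the digest-error test.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'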
00:22:46.199 02:20:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:46.199 rmmod nvme_tcp 00:22:46.199 rmmod nvme_fabrics 00:22:46.199 rmmod nvme_keyring 00:22:46.459 02:20:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:46.459 02:20:00 -- nvmf/common.sh@123 -- # set -e 00:22:46.459 02:20:00 -- nvmf/common.sh@124 -- # return 0 00:22:46.459 02:20:00 -- nvmf/common.sh@477 -- # '[' -n 84786 ']' 00:22:46.459 02:20:00 -- nvmf/common.sh@478 -- # killprocess 84786 00:22:46.459 02:20:00 -- common/autotest_common.sh@926 -- # '[' -z 84786 ']' 00:22:46.459 02:20:00 -- common/autotest_common.sh@930 -- # kill -0 84786 00:22:46.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (84786) - No such process 00:22:46.459 Process with pid 84786 is not found 00:22:46.459 02:20:00 -- common/autotest_common.sh@953 -- # echo 'Process with pid 84786 is not found' 00:22:46.459 02:20:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:46.459 02:20:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:46.459 02:20:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:46.459 02:20:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.459 02:20:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:46.459 02:20:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.459 02:20:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.459 02:20:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.459 02:20:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:46.459 00:22:46.459 real 0m36.233s 00:22:46.459 user 1m9.110s 00:22:46.459 sys 0m9.327s 00:22:46.459 02:20:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.459 ************************************ 00:22:46.459 END TEST nvmf_digest 00:22:46.459 02:20:00 -- common/autotest_common.sh@10 -- # set +x 00:22:46.459 ************************************ 00:22:46.459 02:20:00 -- nvmf/nvmf.sh@109 -- # [[ 1 -eq 1 ]] 00:22:46.459 02:20:00 -- nvmf/nvmf.sh@109 -- # [[ tcp == \t\c\p ]] 00:22:46.459 02:20:00 -- nvmf/nvmf.sh@111 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:46.459 02:20:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:46.459 02:20:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:46.459 02:20:00 -- common/autotest_common.sh@10 -- # set +x 00:22:46.459 ************************************ 00:22:46.459 START TEST nvmf_mdns_discovery 00:22:46.459 ************************************ 00:22:46.459 02:20:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:46.459 * Looking for test storage... 
00:22:46.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:46.459 02:20:00 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:46.459 02:20:00 -- nvmf/common.sh@7 -- # uname -s 00:22:46.459 02:20:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.459 02:20:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.459 02:20:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.459 02:20:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.459 02:20:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.459 02:20:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.459 02:20:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.459 02:20:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.459 02:20:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.459 02:20:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.459 02:20:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:22:46.459 02:20:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:22:46.459 02:20:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.459 02:20:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.459 02:20:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:46.459 02:20:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:46.459 02:20:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.459 02:20:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.459 02:20:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.459 02:20:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.459 02:20:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.459 02:20:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.459 02:20:00 -- 
paths/export.sh@5 -- # export PATH 00:22:46.459 02:20:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.459 02:20:00 -- nvmf/common.sh@46 -- # : 0 00:22:46.459 02:20:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:46.459 02:20:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:46.459 02:20:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:46.459 02:20:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.459 02:20:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.459 02:20:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:46.459 02:20:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:46.459 02:20:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:46.459 02:20:00 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:22:46.459 02:20:00 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:22:46.459 02:20:00 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:46.459 02:20:00 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:46.459 02:20:00 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:46.459 02:20:00 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:46.459 02:20:00 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:22:46.459 02:20:00 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:22:46.459 02:20:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:46.459 02:20:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.460 02:20:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:46.460 02:20:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:46.460 02:20:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:46.460 02:20:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.460 02:20:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.460 02:20:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.460 02:20:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:46.460 02:20:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:46.460 02:20:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:46.460 02:20:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:46.460 02:20:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:46.460 02:20:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:46.460 02:20:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.460 02:20:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.460 02:20:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:46.460 02:20:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:46.460 02:20:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:46.460 02:20:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:46.460 02:20:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:46.460 02:20:00 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.460 02:20:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:46.460 02:20:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:46.460 02:20:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:46.460 02:20:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:46.460 02:20:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:46.460 02:20:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:46.460 Cannot find device "nvmf_tgt_br" 00:22:46.460 02:20:01 -- nvmf/common.sh@154 -- # true 00:22:46.460 02:20:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:46.460 Cannot find device "nvmf_tgt_br2" 00:22:46.460 02:20:01 -- nvmf/common.sh@155 -- # true 00:22:46.460 02:20:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:46.460 02:20:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:46.460 Cannot find device "nvmf_tgt_br" 00:22:46.460 02:20:01 -- nvmf/common.sh@157 -- # true 00:22:46.460 02:20:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:46.460 Cannot find device "nvmf_tgt_br2" 00:22:46.460 02:20:01 -- nvmf/common.sh@158 -- # true 00:22:46.460 02:20:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:46.718 02:20:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:46.718 02:20:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:46.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.718 02:20:01 -- nvmf/common.sh@161 -- # true 00:22:46.718 02:20:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:46.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.718 02:20:01 -- nvmf/common.sh@162 -- # true 00:22:46.718 02:20:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:46.718 02:20:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:46.718 02:20:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:46.718 02:20:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:46.718 02:20:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:46.718 02:20:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:46.719 02:20:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:46.719 02:20:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:46.719 02:20:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:46.719 02:20:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:46.719 02:20:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:46.719 02:20:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:46.719 02:20:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:46.719 02:20:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:46.719 02:20:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:46.719 02:20:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:46.719 02:20:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:22:46.719 02:20:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:46.719 02:20:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:46.719 02:20:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:46.719 02:20:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:46.719 02:20:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:46.719 02:20:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:46.977 02:20:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:46.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:22:46.978 00:22:46.978 --- 10.0.0.2 ping statistics --- 00:22:46.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.978 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:46.978 02:20:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:46.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:46.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:22:46.978 00:22:46.978 --- 10.0.0.3 ping statistics --- 00:22:46.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.978 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:46.978 02:20:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:46.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:46.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:22:46.978 00:22:46.978 --- 10.0.0.1 ping statistics --- 00:22:46.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.978 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:46.978 02:20:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.978 02:20:01 -- nvmf/common.sh@421 -- # return 0 00:22:46.978 02:20:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:46.978 02:20:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.978 02:20:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:46.978 02:20:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:46.978 02:20:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.978 02:20:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:46.978 02:20:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:46.978 02:20:01 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:46.978 02:20:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:46.978 02:20:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:46.978 02:20:01 -- common/autotest_common.sh@10 -- # set +x 00:22:46.978 02:20:01 -- nvmf/common.sh@469 -- # nvmfpid=85375 00:22:46.978 02:20:01 -- nvmf/common.sh@470 -- # waitforlisten 85375 00:22:46.978 02:20:01 -- common/autotest_common.sh@819 -- # '[' -z 85375 ']' 00:22:46.978 02:20:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.978 02:20:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:46.978 02:20:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:46.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.978 02:20:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
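As an aside, a condensed sketch of the veth/bridge topology that nvmf_veth_init assembled above and that the ping checks just verified (same interface and address names as in the trace; link-up commands, the FORWARD rule and teardown are omitted):

    # Target interfaces live in the nvmf_tgt_ns_spdk namespace; the initiator stays in the default one.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port, 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port, 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # bridge ties the *_br peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in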
00:22:46.978 02:20:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:46.978 02:20:01 -- common/autotest_common.sh@10 -- # set +x 00:22:46.978 [2024-05-14 02:20:01.415745] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:46.978 [2024-05-14 02:20:01.415866] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.978 [2024-05-14 02:20:01.558212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.237 [2024-05-14 02:20:01.628659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:47.237 [2024-05-14 02:20:01.628859] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.237 [2024-05-14 02:20:01.628878] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.237 [2024-05-14 02:20:01.628889] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.237 [2024-05-14 02:20:01.628924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.174 02:20:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:48.174 02:20:02 -- common/autotest_common.sh@852 -- # return 0 00:22:48.174 02:20:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:48.174 02:20:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:48.174 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 02:20:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.174 02:20:02 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:48.174 02:20:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.174 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 02:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.174 02:20:02 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:48.174 02:20:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.174 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 02:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.174 02:20:02 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.174 02:20:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.174 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 [2024-05-14 02:20:02.562206] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.174 02:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.174 02:20:02 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:48.174 02:20:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.174 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 [2024-05-14 02:20:02.570307] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:48.174 02:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.174 02:20:02 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:48.174 02:20:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.174 02:20:02 -- 
common/autotest_common.sh@10 -- # set +x 00:22:48.174 null0 00:22:48.174 02:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.174 02:20:02 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:48.174 02:20:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.174 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 null1 00:22:48.174 02:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.174 02:20:02 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:48.174 02:20:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.174 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 null2 00:22:48.174 02:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.174 02:20:02 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:48.175 02:20:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.175 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:48.175 null3 00:22:48.175 02:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.175 02:20:02 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:22:48.175 02:20:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.175 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:48.175 02:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.175 02:20:02 -- host/mdns_discovery.sh@47 -- # hostpid=85425 00:22:48.175 02:20:02 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:48.175 02:20:02 -- host/mdns_discovery.sh@48 -- # waitforlisten 85425 /tmp/host.sock 00:22:48.175 02:20:02 -- common/autotest_common.sh@819 -- # '[' -z 85425 ']' 00:22:48.175 02:20:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:48.175 02:20:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:48.175 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:48.175 02:20:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:48.175 02:20:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:48.175 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:48.175 [2024-05-14 02:20:02.678974] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:48.175 [2024-05-14 02:20:02.679078] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85425 ] 00:22:48.434 [2024-05-14 02:20:02.820283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.434 [2024-05-14 02:20:02.888360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:48.434 [2024-05-14 02:20:02.888532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.370 02:20:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:49.370 02:20:03 -- common/autotest_common.sh@852 -- # return 0 00:22:49.370 02:20:03 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:49.370 02:20:03 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:49.370 02:20:03 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:49.370 02:20:03 -- host/mdns_discovery.sh@57 -- # avahipid=85454 00:22:49.370 02:20:03 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:49.370 02:20:03 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:49.370 02:20:03 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:49.370 Process 1010 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:49.370 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:49.370 Successfully dropped root privileges. 00:22:49.370 avahi-daemon 0.8 starting up. 00:22:49.370 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:49.370 Successfully called chroot(). 00:22:49.370 Successfully dropped remaining capabilities. 00:22:49.370 No service file found in /etc/avahi/services. 00:22:50.307 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:50.307 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:22:50.307 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:50.307 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:50.307 Network interface enumeration completed. 00:22:50.307 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:22:50.307 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:50.307 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:22:50.307 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:50.307 02:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.307 02:20:04 -- common/autotest_common.sh@10 -- # set +x 00:22:50.307 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 3482927824. 
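The avahi setup above amounts to running the daemon inside the target namespace with a minimal config restricted to the two target interfaces; a sketch of the equivalent done by hand, with the same config keys as the echo in host/mdns_discovery.sh (the /tmp/avahi-test.conf path here is only illustrative; the test feeds the config via /dev/fd/63 instead of a file):

    # Restrict avahi to the target-side interfaces and IPv4 only, then run it in the target netns.
    cat > /tmp/avahi-test.conf <<'EOF'
    [server]
    allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    use-ipv4=yes
    use-ipv6=no
    EOF
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-test.conf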
00:22:50.307 02:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:50.307 02:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.307 02:20:04 -- common/autotest_common.sh@10 -- # set +x 00:22:50.307 02:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:50.307 02:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@68 -- # sort 00:22:50.307 02:20:04 -- common/autotest_common.sh@10 -- # set +x 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@68 -- # xargs 00:22:50.307 02:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.307 02:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.307 02:20:04 -- common/autotest_common.sh@10 -- # set +x 00:22:50.307 02:20:04 -- host/mdns_discovery.sh@64 -- # xargs 00:22:50.565 02:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.565 02:20:04 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:50.565 02:20:04 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:50.565 02:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.565 02:20:04 -- common/autotest_common.sh@10 -- # set +x 00:22:50.565 02:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.565 02:20:04 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:50.565 02:20:04 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.565 02:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.565 02:20:04 -- common/autotest_common.sh@10 -- # set +x 00:22:50.565 02:20:04 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:50.565 02:20:04 -- host/mdns_discovery.sh@68 -- # sort 00:22:50.565 02:20:04 -- host/mdns_discovery.sh@68 -- # xargs 00:22:50.565 02:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.565 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.565 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@64 -- # xargs 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.565 02:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@98 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:50.565 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.565 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.565 02:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.565 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:50.565 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@68 -- # xargs 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@68 -- # sort 00:22:50.565 02:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.565 [2024-05-14 02:20:05.121188] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.565 02:20:05 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.566 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.566 02:20:05 -- host/mdns_discovery.sh@64 -- # xargs 00:22:50.566 02:20:05 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.566 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.566 02:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:50.823 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.823 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.823 [2024-05-14 02:20:05.199719] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.823 02:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:50.823 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.823 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.823 02:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:50.823 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.823 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.823 02:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:50.823 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.823 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.823 02:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:50.823 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.823 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.823 02:20:05 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:50.823 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.823 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.823 [2024-05-14 02:20:05.239698] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:50.823 02:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:50.823 02:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.823 02:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.823 [2024-05-14 02:20:05.247651] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:50.823 02:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=85515 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:50.823 02:20:05 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:51.759 [2024-05-14 02:20:06.021203] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:51.759 Established under name 'CDC' 00:22:52.018 [2024-05-14 02:20:06.421227] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:52.018 [2024-05-14 02:20:06.421254] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:22:52.018 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:52.018 cookie is 0 00:22:52.018 is_local: 1 00:22:52.018 our_own: 0 00:22:52.018 wide_area: 0 00:22:52.018 multicast: 1 00:22:52.018 cached: 1 00:22:52.018 [2024-05-14 02:20:06.521226] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:52.018 [2024-05-14 02:20:06.521254] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:22:52.018 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:52.018 cookie is 0 00:22:52.018 is_local: 1 00:22:52.018 our_own: 0 00:22:52.018 wide_area: 0 00:22:52.018 multicast: 1 00:22:52.018 cached: 1 00:22:52.985 [2024-05-14 02:20:07.427455] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:52.985 [2024-05-14 02:20:07.427486] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:52.985 [2024-05-14 02:20:07.427506] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:52.985 [2024-05-14 02:20:07.513805] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:52.985 [2024-05-14 02:20:07.527282] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:52.985 [2024-05-14 02:20:07.527304] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:52.985 [2024-05-14 02:20:07.527355] 
bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:53.243 [2024-05-14 02:20:07.574474] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:53.243 [2024-05-14 02:20:07.574505] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:53.243 [2024-05-14 02:20:07.615716] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:53.243 [2024-05-14 02:20:07.676854] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:53.243 [2024-05-14 02:20:07.676884] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:55.774 02:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.774 02:20:10 -- common/autotest_common.sh@10 -- # set +x 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@80 -- # sort 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@80 -- # xargs 00:22:55.774 02:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@76 -- # sort 00:22:55.774 02:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.774 02:20:10 -- common/autotest_common.sh@10 -- # set +x 00:22:55.774 02:20:10 -- host/mdns_discovery.sh@76 -- # xargs 00:22:55.774 02:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.032 02:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.032 02:20:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@68 -- # sort 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@68 -- # xargs 00:22:56.032 02:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.032 02:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:56.032 02:20:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@64 -- # sort 
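Taken together, the discovery path exercised above has two halves, one on each side of the bridge; a condensed sketch using the same service name, port and sockets seen in the trace (rpc_cmd in the trace wraps scripts/rpc.py against /tmp/host.sock):

    # Target side: advertise the discovery controller (port 8009) as an _nvme-disc._tcp mDNS service.
    ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local \
        --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp &

    # Host side: let bdev_nvme browse for _nvme-disc._tcp, attach whatever it resolves,
    # then inspect the resulting mdns discovery service and attached controllers.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info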
00:22:56.032 02:20:10 -- host/mdns_discovery.sh@64 -- # xargs 00:22:56.032 02:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:56.032 02:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.032 02:20:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@72 -- # xargs 00:22:56.032 02:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:56.032 02:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:56.032 02:20:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@72 -- # xargs 00:22:56.032 02:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:22:56.032 02:20:10 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:56.033 02:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.033 02:20:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.033 02:20:10 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:56.033 02:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.290 02:20:10 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:56.290 02:20:10 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:22:56.290 02:20:10 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:22:56.290 02:20:10 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:56.290 02:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.290 02:20:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.290 02:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.290 02:20:10 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:22:56.290 02:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.290 02:20:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.290 02:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.290 02:20:10 -- host/mdns_discovery.sh@139 -- # sleep 1 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:57.225 02:20:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@64 -- # sort 00:22:57.225 02:20:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@64 -- # xargs 00:22:57.225 02:20:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:57.225 02:20:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.225 02:20:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.225 02:20:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:57.225 02:20:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.225 02:20:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.225 [2024-05-14 02:20:11.792973] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:57.225 [2024-05-14 02:20:11.793803] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:57.225 [2024-05-14 02:20:11.793862] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:57.225 [2024-05-14 02:20:11.793929] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:57.225 [2024-05-14 02:20:11.793993] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:57.225 02:20:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:22:57.225 02:20:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.225 02:20:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.225 [2024-05-14 02:20:11.800991] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:57.225 [2024-05-14 02:20:11.801805] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:57.225 [2024-05-14 02:20:11.801882] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:57.225 02:20:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.225 02:20:11 -- host/mdns_discovery.sh@149 -- # sleep 1 00:22:57.484 [2024-05-14 02:20:11.934079] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:22:57.484 [2024-05-14 02:20:11.934264] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:22:57.484 [2024-05-14 02:20:11.991459] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:57.484 [2024-05-14 02:20:11.991502] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:57.484 [2024-05-14 02:20:11.991509] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:57.484 [2024-05-14 02:20:11.991527] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:57.484 [2024-05-14 02:20:11.991614] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:57.484 [2024-05-14 02:20:11.991623] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:57.484 [2024-05-14 02:20:11.991643] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:57.484 [2024-05-14 02:20:11.991672] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:57.484 [2024-05-14 02:20:12.037268] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:57.484 [2024-05-14 02:20:12.037291] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:57.484 [2024-05-14 02:20:12.037330] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:57.484 [2024-05-14 02:20:12.037339] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:58.421 02:20:12 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:22:58.421 02:20:12 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:58.421 02:20:12 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:58.421 02:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.421 02:20:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.421 02:20:12 -- host/mdns_discovery.sh@68 -- # sort 00:22:58.421 02:20:12 -- host/mdns_discovery.sh@68 -- # xargs 00:22:58.421 02:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.421 02:20:12 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:58.421 02:20:12 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:22:58.422 02:20:12 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.422 02:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.422 02:20:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.422 02:20:12 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:58.422 02:20:12 -- host/mdns_discovery.sh@64 -- # sort 00:22:58.422 02:20:12 -- host/mdns_discovery.sh@64 -- # xargs 00:22:58.422 02:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.422 02:20:12 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:58.422 02:20:12 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:22:58.422 02:20:12 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:58.422 02:20:12 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:58.422 02:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.422 02:20:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.422 02:20:12 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:58.422 02:20:12 -- host/mdns_discovery.sh@72 -- # xargs 00:22:58.422 02:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@72 -- # 
jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:58.683 02:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.683 02:20:13 -- common/autotest_common.sh@10 -- # set +x 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@72 -- # xargs 00:22:58.683 02:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:58.683 02:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.683 02:20:13 -- common/autotest_common.sh@10 -- # set +x 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:58.683 02:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:58.683 02:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.683 02:20:13 -- common/autotest_common.sh@10 -- # set +x 00:22:58.683 [2024-05-14 02:20:13.139215] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:58.683 [2024-05-14 02:20:13.139268] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.683 [2024-05-14 02:20:13.139319] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:58.683 [2024-05-14 02:20:13.139332] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:58.683 02:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:58.683 02:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.683 02:20:13 -- common/autotest_common.sh@10 -- # set +x 00:22:58.683 [2024-05-14 02:20:13.146815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.683 [2024-05-14 02:20:13.146878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.683 [2024-05-14 02:20:13.146891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.683 [2024-05-14 02:20:13.146900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.683 [2024-05-14 02:20:13.146910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.683 [2024-05-14 02:20:13.146919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.683 [2024-05-14 
02:20:13.146928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.683 [2024-05-14 02:20:13.146937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.683 [2024-05-14 02:20:13.146946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.683 [2024-05-14 02:20:13.147219] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:58.683 [2024-05-14 02:20:13.147283] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:58.683 02:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.683 02:20:13 -- host/mdns_discovery.sh@162 -- # sleep 1 00:22:58.683 [2024-05-14 02:20:13.153101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.683 [2024-05-14 02:20:13.153131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.683 [2024-05-14 02:20:13.153144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.683 [2024-05-14 02:20:13.153153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.683 [2024-05-14 02:20:13.153164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.683 [2024-05-14 02:20:13.153173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.683 [2024-05-14 02:20:13.153183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.683 [2024-05-14 02:20:13.153192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.683 [2024-05-14 02:20:13.153201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.683 [2024-05-14 02:20:13.156756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.683 [2024-05-14 02:20:13.163067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.683 [2024-05-14 02:20:13.166839] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.683 [2024-05-14 02:20:13.167037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.683 [2024-05-14 02:20:13.167086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.683 [2024-05-14 02:20:13.167102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.683 [2024-05-14 02:20:13.167113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.683 [2024-05-14 02:20:13.167130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.683 [2024-05-14 02:20:13.167145] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.683 [2024-05-14 02:20:13.167160] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.683 [2024-05-14 02:20:13.167170] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.684 [2024-05-14 02:20:13.167186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.684 [2024-05-14 02:20:13.173103] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.684 [2024-05-14 02:20:13.173229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.173272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.173287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.684 [2024-05-14 02:20:13.173296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.684 [2024-05-14 02:20:13.173311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.684 [2024-05-14 02:20:13.173356] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.684 [2024-05-14 02:20:13.173365] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.684 [2024-05-14 02:20:13.173406] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.684 [2024-05-14 02:20:13.173437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.684 [2024-05-14 02:20:13.177000] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.684 [2024-05-14 02:20:13.177150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.177207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.177221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.684 [2024-05-14 02:20:13.177231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.684 [2024-05-14 02:20:13.177246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.684 [2024-05-14 02:20:13.177259] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.684 [2024-05-14 02:20:13.177267] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.684 [2024-05-14 02:20:13.177275] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.684 [2024-05-14 02:20:13.177289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
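Note: the connect() failures above (errno = 111, i.e. ECONNREFUSED) are expected at this point in the test. The script has just removed the port 4420 listeners from nqn.2016-06.io.spdk:cnode0 and nqn.2016-06.io.spdk:cnode20, so every reconnect attempt to 4420 is refused and the controller resets keep failing until the discovery poller fetches a fresh log page and moves the paths to 4421. A minimal sketch of the step that puts the host into this state, reusing the rpc_cmd wrapper, NQNs and addresses exactly as they appear in the trace (running it outside the harness would additionally need a live SPDK target with these subsystems configured):

    # drop the 4420 listeners; existing host connections start failing with ECONNREFUSED
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420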
00:22:58.684 [2024-05-14 02:20:13.183199] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.684 [2024-05-14 02:20:13.183306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.183382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.183397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.684 [2024-05-14 02:20:13.183424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.684 [2024-05-14 02:20:13.183454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.684 [2024-05-14 02:20:13.183506] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.684 [2024-05-14 02:20:13.183549] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.684 [2024-05-14 02:20:13.183574] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.684 [2024-05-14 02:20:13.183589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.684 [2024-05-14 02:20:13.187092] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.684 [2024-05-14 02:20:13.187160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.187216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.187230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.684 [2024-05-14 02:20:13.187240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.684 [2024-05-14 02:20:13.187270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.684 [2024-05-14 02:20:13.187283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.684 [2024-05-14 02:20:13.187316] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.684 [2024-05-14 02:20:13.187324] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.684 [2024-05-14 02:20:13.187338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.684 [2024-05-14 02:20:13.193280] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.684 [2024-05-14 02:20:13.193424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.193483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.193498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.684 [2024-05-14 02:20:13.193508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.684 [2024-05-14 02:20:13.193523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.684 [2024-05-14 02:20:13.193610] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.684 [2024-05-14 02:20:13.193623] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.684 [2024-05-14 02:20:13.193632] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.684 [2024-05-14 02:20:13.193647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.684 [2024-05-14 02:20:13.197137] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.684 [2024-05-14 02:20:13.197245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.197324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.197339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.684 [2024-05-14 02:20:13.197364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.684 [2024-05-14 02:20:13.197379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.684 [2024-05-14 02:20:13.197392] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.684 [2024-05-14 02:20:13.197400] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.684 [2024-05-14 02:20:13.197409] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.684 [2024-05-14 02:20:13.197423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.684 [2024-05-14 02:20:13.203361] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.684 [2024-05-14 02:20:13.203513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.203572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.203587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.684 [2024-05-14 02:20:13.203597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.684 [2024-05-14 02:20:13.203612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.684 [2024-05-14 02:20:13.203667] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.684 [2024-05-14 02:20:13.203677] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.684 [2024-05-14 02:20:13.203687] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.684 [2024-05-14 02:20:13.203732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.684 [2024-05-14 02:20:13.207244] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.684 [2024-05-14 02:20:13.207334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.207377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.207391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.684 [2024-05-14 02:20:13.207401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.684 [2024-05-14 02:20:13.207432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.684 [2024-05-14 02:20:13.207461] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.684 [2024-05-14 02:20:13.207470] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.684 [2024-05-14 02:20:13.207478] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.684 [2024-05-14 02:20:13.207492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.684 [2024-05-14 02:20:13.213440] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.684 [2024-05-14 02:20:13.213514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.213557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.213572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.684 [2024-05-14 02:20:13.213581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.684 [2024-05-14 02:20:13.213596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.684 [2024-05-14 02:20:13.213610] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.684 [2024-05-14 02:20:13.213618] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.684 [2024-05-14 02:20:13.213627] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.684 [2024-05-14 02:20:13.213640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.684 [2024-05-14 02:20:13.217307] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.684 [2024-05-14 02:20:13.217427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.217502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.684 [2024-05-14 02:20:13.217516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.685 [2024-05-14 02:20:13.217526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.685 [2024-05-14 02:20:13.217541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.685 [2024-05-14 02:20:13.217554] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.685 [2024-05-14 02:20:13.217562] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.685 [2024-05-14 02:20:13.217571] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.685 [2024-05-14 02:20:13.217585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.685 [2024-05-14 02:20:13.223501] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.685 [2024-05-14 02:20:13.223592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.223634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.223665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.685 [2024-05-14 02:20:13.223689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.685 [2024-05-14 02:20:13.223705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.685 [2024-05-14 02:20:13.223789] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.685 [2024-05-14 02:20:13.223815] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.685 [2024-05-14 02:20:13.223840] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.685 [2024-05-14 02:20:13.223855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.685 [2024-05-14 02:20:13.227362] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.685 [2024-05-14 02:20:13.227468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.227526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.227555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.685 [2024-05-14 02:20:13.227565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.685 [2024-05-14 02:20:13.227580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.685 [2024-05-14 02:20:13.227609] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.685 [2024-05-14 02:20:13.227617] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.685 [2024-05-14 02:20:13.227626] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.685 [2024-05-14 02:20:13.227640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.685 [2024-05-14 02:20:13.233564] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.685 [2024-05-14 02:20:13.233655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.233730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.233745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.685 [2024-05-14 02:20:13.233754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.685 [2024-05-14 02:20:13.233786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.685 [2024-05-14 02:20:13.233831] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.685 [2024-05-14 02:20:13.233886] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.685 [2024-05-14 02:20:13.233896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.685 [2024-05-14 02:20:13.233912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.685 [2024-05-14 02:20:13.237429] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.685 [2024-05-14 02:20:13.237527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.237604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.237634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.685 [2024-05-14 02:20:13.237644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.685 [2024-05-14 02:20:13.237660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.685 [2024-05-14 02:20:13.237673] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.685 [2024-05-14 02:20:13.237681] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.685 [2024-05-14 02:20:13.237690] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.685 [2024-05-14 02:20:13.237705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.685 [2024-05-14 02:20:13.243628] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.685 [2024-05-14 02:20:13.243815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.243906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.243924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.685 [2024-05-14 02:20:13.243934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.685 [2024-05-14 02:20:13.243960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.685 [2024-05-14 02:20:13.243991] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.685 [2024-05-14 02:20:13.244000] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.685 [2024-05-14 02:20:13.244009] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.685 [2024-05-14 02:20:13.244025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.685 [2024-05-14 02:20:13.247495] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.685 [2024-05-14 02:20:13.247598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.247639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.247670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.685 [2024-05-14 02:20:13.247680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.685 [2024-05-14 02:20:13.247694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.685 [2024-05-14 02:20:13.247707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.685 [2024-05-14 02:20:13.247715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.685 [2024-05-14 02:20:13.247740] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.685 [2024-05-14 02:20:13.247754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.685 [2024-05-14 02:20:13.253740] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.685 [2024-05-14 02:20:13.253868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.253912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.253926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.685 [2024-05-14 02:20:13.253962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.685 [2024-05-14 02:20:13.253995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.685 [2024-05-14 02:20:13.254031] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.685 [2024-05-14 02:20:13.254047] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.685 [2024-05-14 02:20:13.254061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.685 [2024-05-14 02:20:13.254081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.685 [2024-05-14 02:20:13.257570] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.685 [2024-05-14 02:20:13.257662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.257704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.257736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.685 [2024-05-14 02:20:13.257745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.685 [2024-05-14 02:20:13.257775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.685 [2024-05-14 02:20:13.257803] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.685 [2024-05-14 02:20:13.257811] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.685 [2024-05-14 02:20:13.257836] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.685 [2024-05-14 02:20:13.257875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.685 [2024-05-14 02:20:13.263823] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.685 [2024-05-14 02:20:13.263940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.263986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.685 [2024-05-14 02:20:13.264001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.685 [2024-05-14 02:20:13.264011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.685 [2024-05-14 02:20:13.264042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.685 [2024-05-14 02:20:13.264101] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.685 [2024-05-14 02:20:13.264110] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.686 [2024-05-14 02:20:13.264119] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.686 [2024-05-14 02:20:13.264133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.686 [2024-05-14 02:20:13.267649] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.686 [2024-05-14 02:20:13.267724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.686 [2024-05-14 02:20:13.267779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.686 [2024-05-14 02:20:13.267808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.686 [2024-05-14 02:20:13.267820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.686 [2024-05-14 02:20:13.267836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.686 [2024-05-14 02:20:13.267862] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.686 [2024-05-14 02:20:13.267873] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.686 [2024-05-14 02:20:13.267882] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.686 [2024-05-14 02:20:13.267896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.945 [2024-05-14 02:20:13.273910] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.945 [2024-05-14 02:20:13.274034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.945 [2024-05-14 02:20:13.274079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.945 [2024-05-14 02:20:13.274094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77480 with addr=10.0.0.3, port=4420 00:22:58.945 [2024-05-14 02:20:13.274105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77480 is same with the state(5) to be set 00:22:58.945 [2024-05-14 02:20:13.274120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77480 (9): Bad file descriptor 00:22:58.945 [2024-05-14 02:20:13.274151] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.945 [2024-05-14 02:20:13.274161] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.945 [2024-05-14 02:20:13.274170] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.945 [2024-05-14 02:20:13.274184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.945 [2024-05-14 02:20:13.277698] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.945 [2024-05-14 02:20:13.277845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.945 [2024-05-14 02:20:13.277902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.945 [2024-05-14 02:20:13.277932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05e0 with addr=10.0.0.2, port=4420 00:22:58.946 [2024-05-14 02:20:13.277969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05e0 is same with the state(5) to be set 00:22:58.946 [2024-05-14 02:20:13.277985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05e0 (9): Bad file descriptor 00:22:58.946 [2024-05-14 02:20:13.277999] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.946 [2024-05-14 02:20:13.278008] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.946 [2024-05-14 02:20:13.278017] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.946 [2024-05-14 02:20:13.278031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.946 [2024-05-14 02:20:13.279150] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:58.946 [2024-05-14 02:20:13.279188] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:58.946 [2024-05-14 02:20:13.279209] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.946 [2024-05-14 02:20:13.279244] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:22:58.946 [2024-05-14 02:20:13.279260] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:58.946 [2024-05-14 02:20:13.279273] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:58.946 [2024-05-14 02:20:13.365355] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:58.946 [2024-05-14 02:20:13.365444] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.879 02:20:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.879 02:20:14 -- common/autotest_common.sh@10 -- # set +x 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@68 -- # sort 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@68 -- # xargs 00:22:59.879 02:20:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.879 02:20:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.879 02:20:14 -- common/autotest_common.sh@10 -- # set +x 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@64 -- # sort 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@64 -- # xargs 00:22:59.879 02:20:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.879 02:20:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.879 02:20:14 -- common/autotest_common.sh@10 -- # set +x 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@72 -- # xargs 00:22:59.879 02:20:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
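Note: once the refreshed discovery log page no longer lists the 4420 entries ("not found" above), bdev_nvme drops those paths and only the 4421 paths survive; the checks that follow confirm this per discovered controller. A minimal sketch of that verification, built only from commands and jq filters already present in the trace (the /tmp/host.sock socket and the mdns0_nvme0 controller name come from the test setup):

    # list the remaining transport service IDs for one discovered controller; only 4421 is expected
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs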
00:22:59.879 02:20:14 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@72 -- # xargs 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:59.879 02:20:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.879 02:20:14 -- common/autotest_common.sh@10 -- # set +x 00:22:59.879 02:20:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:59.879 02:20:14 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:59.880 02:20:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.880 02:20:14 -- common/autotest_common.sh@10 -- # set +x 00:22:59.880 02:20:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.880 02:20:14 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:59.880 02:20:14 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:59.880 02:20:14 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:22:59.880 02:20:14 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:59.880 02:20:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.880 02:20:14 -- common/autotest_common.sh@10 -- # set +x 00:22:59.880 02:20:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.880 02:20:14 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:00.138 [2024-05-14 02:20:14.521831] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:01.074 02:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@80 -- # sort 00:23:01.074 02:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@80 -- # xargs 00:23:01.074 02:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:01.074 02:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@68 -- # sort 00:23:01.074 02:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@68 -- # xargs 00:23:01.074 02:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@64 -- # sort 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:01.074 02:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.074 02:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@64 -- # xargs 00:23:01.074 02:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:01.074 02:20:15 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:01.074 02:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.074 02:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.074 02:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.333 02:20:15 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:01.333 02:20:15 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:01.333 02:20:15 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:01.333 02:20:15 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:01.333 02:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.333 02:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.333 02:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.333 02:20:15 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:01.333 02:20:15 -- common/autotest_common.sh@640 -- # local es=0 00:23:01.333 02:20:15 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:01.333 02:20:15 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:01.333 02:20:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:01.333 02:20:15 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:01.333 02:20:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:01.333 02:20:15 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:01.333 02:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.333 02:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.333 [2024-05-14 02:20:15.717750] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:01.333 2024/05/14 02:20:15 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:01.333 request: 00:23:01.333 { 00:23:01.333 "method": "bdev_nvme_start_mdns_discovery", 00:23:01.333 "params": { 00:23:01.333 "name": "mdns", 00:23:01.333 "svcname": "_nvme-disc._http", 00:23:01.333 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:01.333 } 00:23:01.333 } 00:23:01.333 Got JSON-RPC error response 00:23:01.333 GoRPCClient: error on JSON-RPC call 00:23:01.333 02:20:15 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:01.333 02:20:15 -- 
common/autotest_common.sh@643 -- # es=1 00:23:01.333 02:20:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:01.333 02:20:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:01.333 02:20:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:01.333 02:20:15 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:01.592 [2024-05-14 02:20:16.106415] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:01.851 [2024-05-14 02:20:16.206413] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:01.851 [2024-05-14 02:20:16.306421] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:01.851 [2024-05-14 02:20:16.306441] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:23:01.851 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:01.851 cookie is 0 00:23:01.851 is_local: 1 00:23:01.851 our_own: 0 00:23:01.851 wide_area: 0 00:23:01.851 multicast: 1 00:23:01.851 cached: 1 00:23:01.851 [2024-05-14 02:20:16.406450] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:01.851 [2024-05-14 02:20:16.406471] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:23:01.851 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:01.851 cookie is 0 00:23:01.851 is_local: 1 00:23:01.851 our_own: 0 00:23:01.851 wide_area: 0 00:23:01.851 multicast: 1 00:23:01.851 cached: 1 00:23:02.787 [2024-05-14 02:20:17.310832] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:02.787 [2024-05-14 02:20:17.310857] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:02.787 [2024-05-14 02:20:17.310890] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:03.046 [2024-05-14 02:20:17.397058] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:03.046 [2024-05-14 02:20:17.410577] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:03.046 [2024-05-14 02:20:17.410596] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:03.046 [2024-05-14 02:20:17.410613] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.046 [2024-05-14 02:20:17.458829] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:03.046 [2024-05-14 02:20:17.458867] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:03.046 [2024-05-14 02:20:17.496464] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:03.046 [2024-05-14 02:20:17.555718] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:03.046 [2024-05-14 02:20:17.555989] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:06.333 02:20:20 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:06.333 02:20:20 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:06.333 02:20:20 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:06.333 02:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.333 02:20:20 -- host/mdns_discovery.sh@80 -- # sort 00:23:06.333 02:20:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.333 02:20:20 -- host/mdns_discovery.sh@80 -- # xargs 00:23:06.333 02:20:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.333 02:20:20 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:06.333 02:20:20 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:06.333 02:20:20 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:06.333 02:20:20 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:06.333 02:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@76 -- # sort 00:23:06.334 02:20:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@76 -- # xargs 00:23:06.334 02:20:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@64 -- # sort 00:23:06.334 02:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.334 02:20:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@64 -- # xargs 00:23:06.334 02:20:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:06.334 02:20:20 -- common/autotest_common.sh@640 -- # local es=0 00:23:06.334 02:20:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:06.334 02:20:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:06.334 02:20:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:06.334 02:20:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:06.334 02:20:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:06.334 02:20:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:06.334 02:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.334 02:20:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.334 [2024-05-14 02:20:20.902107] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:06.334 2024/05/14 02:20:20 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:06.334 request: 00:23:06.334 { 00:23:06.334 "method": "bdev_nvme_start_mdns_discovery", 00:23:06.334 "params": { 00:23:06.334 "name": "cdc", 00:23:06.334 "svcname": "_nvme-disc._tcp", 00:23:06.334 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:06.334 } 00:23:06.334 } 00:23:06.334 Got JSON-RPC error response 00:23:06.334 GoRPCClient: error on JSON-RPC call 00:23:06.334 02:20:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:06.334 02:20:20 -- common/autotest_common.sh@643 -- # es=1 00:23:06.334 02:20:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:06.334 02:20:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:06.334 02:20:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:06.334 02:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.334 02:20:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@76 -- # sort 00:23:06.334 02:20:20 -- host/mdns_discovery.sh@76 -- # xargs 00:23:06.592 02:20:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.592 02:20:20 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:06.592 02:20:20 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:06.592 02:20:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.592 02:20:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:06.592 02:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.592 02:20:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.592 02:20:20 -- host/mdns_discovery.sh@64 -- # sort 00:23:06.592 02:20:20 -- host/mdns_discovery.sh@64 -- # xargs 00:23:06.592 02:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.593 02:20:21 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:06.593 02:20:21 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:06.593 02:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.593 02:20:21 -- common/autotest_common.sh@10 -- # set +x 00:23:06.593 02:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.593 02:20:21 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:06.593 02:20:21 -- host/mdns_discovery.sh@197 -- # kill 85425 00:23:06.593 02:20:21 -- host/mdns_discovery.sh@200 -- # wait 85425 00:23:06.593 [2024-05-14 02:20:21.111104] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:06.851 02:20:21 -- host/mdns_discovery.sh@201 -- # kill 85515 00:23:06.851 Got SIGTERM, quitting. 00:23:06.852 02:20:21 -- host/mdns_discovery.sh@202 -- # kill 85454 00:23:06.852 02:20:21 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:06.852 Got SIGTERM, quitting. 
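For reference, the Code=-17 (File exists) failure above is the intended negative case: a second bdev_nvme_start_mdns_discovery for the same service type is rejected while the first browser is still running. A minimal sketch of that sequence against the same host socket, with scripts/rpc.py standing in for the rpc_cmd wrapper used by the test (names mirror the log; everything else is illustrative):

    # first start succeeds and begins browsing _nvme-disc._tcp via avahi
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # inspect the active browser and attached discovery controllers
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
    # a second start for the same service type fails with Code=-17 (File exists)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # teardown, as the trace does next
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns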
00:23:06.852 02:20:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:06.852 02:20:21 -- nvmf/common.sh@116 -- # sync 00:23:06.852 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:06.852 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:06.852 avahi-daemon 0.8 exiting. 00:23:06.852 02:20:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:06.852 02:20:21 -- nvmf/common.sh@119 -- # set +e 00:23:06.852 02:20:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:06.852 02:20:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:06.852 rmmod nvme_tcp 00:23:06.852 rmmod nvme_fabrics 00:23:06.852 rmmod nvme_keyring 00:23:06.852 02:20:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:06.852 02:20:21 -- nvmf/common.sh@123 -- # set -e 00:23:06.852 02:20:21 -- nvmf/common.sh@124 -- # return 0 00:23:06.852 02:20:21 -- nvmf/common.sh@477 -- # '[' -n 85375 ']' 00:23:06.852 02:20:21 -- nvmf/common.sh@478 -- # killprocess 85375 00:23:06.852 02:20:21 -- common/autotest_common.sh@926 -- # '[' -z 85375 ']' 00:23:06.852 02:20:21 -- common/autotest_common.sh@930 -- # kill -0 85375 00:23:06.852 02:20:21 -- common/autotest_common.sh@931 -- # uname 00:23:06.852 02:20:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:06.852 02:20:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85375 00:23:06.852 02:20:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:06.852 02:20:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:06.852 killing process with pid 85375 00:23:06.852 02:20:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85375' 00:23:06.852 02:20:21 -- common/autotest_common.sh@945 -- # kill 85375 00:23:06.852 02:20:21 -- common/autotest_common.sh@950 -- # wait 85375 00:23:07.110 02:20:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:07.110 02:20:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:07.110 02:20:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:07.110 02:20:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.110 02:20:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:07.110 02:20:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.110 02:20:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.110 02:20:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.110 02:20:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:07.110 ************************************ 00:23:07.110 END TEST nvmf_mdns_discovery 00:23:07.110 ************************************ 00:23:07.110 00:23:07.110 real 0m20.755s 00:23:07.110 user 0m40.800s 00:23:07.110 sys 0m2.000s 00:23:07.110 02:20:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.110 02:20:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.110 02:20:21 -- nvmf/nvmf.sh@114 -- # [[ 1 -eq 1 ]] 00:23:07.110 02:20:21 -- nvmf/nvmf.sh@115 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:07.110 02:20:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:07.110 02:20:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:07.110 02:20:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.110 ************************************ 00:23:07.110 START TEST nvmf_multipath 00:23:07.110 ************************************ 00:23:07.110 02:20:21 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:07.369 * Looking for test storage... 00:23:07.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:07.369 02:20:21 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:07.369 02:20:21 -- nvmf/common.sh@7 -- # uname -s 00:23:07.369 02:20:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.369 02:20:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.369 02:20:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.369 02:20:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.369 02:20:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.369 02:20:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.369 02:20:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.369 02:20:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.369 02:20:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.369 02:20:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.369 02:20:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:23:07.369 02:20:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:23:07.369 02:20:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.369 02:20:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.369 02:20:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:07.369 02:20:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:07.369 02:20:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.369 02:20:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.369 02:20:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.369 02:20:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.369 02:20:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.369 02:20:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.369 02:20:21 -- paths/export.sh@5 -- # export PATH 00:23:07.369 02:20:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.369 02:20:21 -- nvmf/common.sh@46 -- # : 0 00:23:07.369 02:20:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:07.369 02:20:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:07.369 02:20:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:07.369 02:20:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.369 02:20:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.369 02:20:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:07.369 02:20:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:07.369 02:20:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:07.369 02:20:21 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:07.369 02:20:21 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:07.369 02:20:21 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:07.369 02:20:21 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:07.369 02:20:21 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.369 02:20:21 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:07.369 02:20:21 -- host/multipath.sh@30 -- # nvmftestinit 00:23:07.369 02:20:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:07.369 02:20:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.369 02:20:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:07.369 02:20:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:07.369 02:20:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:07.369 02:20:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.369 02:20:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.369 02:20:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.369 02:20:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:07.369 02:20:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:07.369 02:20:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:07.369 02:20:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:07.369 02:20:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:07.369 02:20:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:07.369 02:20:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.369 02:20:21 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.369 02:20:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:07.369 02:20:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:07.369 02:20:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:07.369 02:20:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:07.369 02:20:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:07.369 02:20:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.369 02:20:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:07.370 02:20:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:07.370 02:20:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:07.370 02:20:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:07.370 02:20:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:07.370 02:20:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:07.370 Cannot find device "nvmf_tgt_br" 00:23:07.370 02:20:21 -- nvmf/common.sh@154 -- # true 00:23:07.370 02:20:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:07.370 Cannot find device "nvmf_tgt_br2" 00:23:07.370 02:20:21 -- nvmf/common.sh@155 -- # true 00:23:07.370 02:20:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:07.370 02:20:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:07.370 Cannot find device "nvmf_tgt_br" 00:23:07.370 02:20:21 -- nvmf/common.sh@157 -- # true 00:23:07.370 02:20:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:07.370 Cannot find device "nvmf_tgt_br2" 00:23:07.370 02:20:21 -- nvmf/common.sh@158 -- # true 00:23:07.370 02:20:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:07.370 02:20:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:07.370 02:20:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:07.370 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.370 02:20:21 -- nvmf/common.sh@161 -- # true 00:23:07.370 02:20:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:07.370 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.370 02:20:21 -- nvmf/common.sh@162 -- # true 00:23:07.370 02:20:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:07.370 02:20:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:07.370 02:20:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:07.370 02:20:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:07.370 02:20:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:07.628 02:20:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:07.628 02:20:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:07.628 02:20:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:07.628 02:20:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:07.628 02:20:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:07.628 02:20:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:07.628 02:20:22 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:07.628 02:20:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:07.628 02:20:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:07.628 02:20:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:07.628 02:20:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:07.628 02:20:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:07.628 02:20:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:07.628 02:20:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:07.628 02:20:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:07.628 02:20:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:07.628 02:20:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:07.628 02:20:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:07.628 02:20:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:07.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:23:07.628 00:23:07.628 --- 10.0.0.2 ping statistics --- 00:23:07.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.628 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:23:07.628 02:20:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:07.628 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:07.628 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:07.628 00:23:07.628 --- 10.0.0.3 ping statistics --- 00:23:07.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.628 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:07.628 02:20:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:07.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:23:07.628 00:23:07.628 --- 10.0.0.1 ping statistics --- 00:23:07.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.628 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:23:07.628 02:20:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.628 02:20:22 -- nvmf/common.sh@421 -- # return 0 00:23:07.628 02:20:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:07.628 02:20:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.628 02:20:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:07.628 02:20:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:07.628 02:20:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.628 02:20:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:07.628 02:20:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:07.629 02:20:22 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:07.629 02:20:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:07.629 02:20:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:07.629 02:20:22 -- common/autotest_common.sh@10 -- # set +x 00:23:07.629 02:20:22 -- nvmf/common.sh@469 -- # nvmfpid=86018 00:23:07.629 02:20:22 -- nvmf/common.sh@470 -- # waitforlisten 86018 00:23:07.629 02:20:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:07.629 02:20:22 -- common/autotest_common.sh@819 -- # '[' -z 86018 ']' 00:23:07.629 02:20:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.629 02:20:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:07.629 02:20:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.629 02:20:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:07.629 02:20:22 -- common/autotest_common.sh@10 -- # set +x 00:23:07.629 [2024-05-14 02:20:22.197936] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:07.629 [2024-05-14 02:20:22.198033] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.888 [2024-05-14 02:20:22.339740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:07.888 [2024-05-14 02:20:22.408377] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:07.888 [2024-05-14 02:20:22.408537] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.888 [2024-05-14 02:20:22.408549] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.888 [2024-05-14 02:20:22.408557] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
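The target that just started above listens inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init prepared: an initiator-side veth at 10.0.0.1 and two target-side veths at 10.0.0.2 and 10.0.0.3, with their peer ends bridged in the root namespace. Condensed from the trace, and assuming a clean host, that setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry the addresses, the *_br ends stay in the root namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up and bridge the root-namespace ends together
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings recorded above simply verify this topology before the target is used.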
00:23:07.888 [2024-05-14 02:20:22.408709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.888 [2024-05-14 02:20:22.408716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.834 02:20:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:08.834 02:20:23 -- common/autotest_common.sh@852 -- # return 0 00:23:08.834 02:20:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:08.834 02:20:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:08.835 02:20:23 -- common/autotest_common.sh@10 -- # set +x 00:23:08.835 02:20:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.835 02:20:23 -- host/multipath.sh@33 -- # nvmfapp_pid=86018 00:23:08.835 02:20:23 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:09.094 [2024-05-14 02:20:23.508908] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.094 02:20:23 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:09.352 Malloc0 00:23:09.352 02:20:23 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:09.610 02:20:24 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:09.869 02:20:24 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.127 [2024-05-14 02:20:24.600123] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.127 02:20:24 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:10.386 [2024-05-14 02:20:24.836312] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:10.386 02:20:24 -- host/multipath.sh@44 -- # bdevperf_pid=86122 00:23:10.386 02:20:24 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:10.386 02:20:24 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.386 02:20:24 -- host/multipath.sh@47 -- # waitforlisten 86122 /var/tmp/bdevperf.sock 00:23:10.386 02:20:24 -- common/autotest_common.sh@819 -- # '[' -z 86122 ']' 00:23:10.386 02:20:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.386 02:20:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:10.386 02:20:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
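Condensed from the trace above, the target side of the multipath test is provisioned with a short rpc.py sequence (rpc.py here stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py): one TCP transport, one malloc bdev, and a single subsystem exposing the same namespace through two listeners so the initiator can attach two paths.

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    # -a: allow any host, -r: ANA reporting, -m 2: max namespaces
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on the same address give the initiator two ANA-managed paths
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The bdevperf process being waited on here then attaches both paths with bdev_nvme_attach_controller -x multipath over its own /var/tmp/bdevperf.sock RPC socket, as the next trace lines show.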
00:23:10.386 02:20:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:10.386 02:20:24 -- common/autotest_common.sh@10 -- # set +x 00:23:11.321 02:20:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:11.321 02:20:25 -- common/autotest_common.sh@852 -- # return 0 00:23:11.321 02:20:25 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:11.579 02:20:26 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:12.145 Nvme0n1 00:23:12.145 02:20:26 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:12.403 Nvme0n1 00:23:12.403 02:20:26 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:12.403 02:20:26 -- host/multipath.sh@78 -- # sleep 1 00:23:13.338 02:20:27 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:13.338 02:20:27 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:13.905 02:20:28 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:13.905 02:20:28 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:13.905 02:20:28 -- host/multipath.sh@65 -- # dtrace_pid=86215 00:23:13.905 02:20:28 -- host/multipath.sh@66 -- # sleep 6 00:23:13.905 02:20:28 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86018 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:20.466 02:20:34 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:20.466 02:20:34 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:20.466 02:20:34 -- host/multipath.sh@67 -- # active_port=4421 00:23:20.466 02:20:34 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:20.466 Attaching 4 probes... 
00:23:20.466 @path[10.0.0.2, 4421]: 16727 00:23:20.466 @path[10.0.0.2, 4421]: 17038 00:23:20.466 @path[10.0.0.2, 4421]: 17090 00:23:20.466 @path[10.0.0.2, 4421]: 17020 00:23:20.466 @path[10.0.0.2, 4421]: 17161 00:23:20.466 02:20:34 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:20.466 02:20:34 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:20.466 02:20:34 -- host/multipath.sh@69 -- # sed -n 1p 00:23:20.466 02:20:34 -- host/multipath.sh@69 -- # port=4421 00:23:20.466 02:20:34 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:20.466 02:20:34 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:20.466 02:20:34 -- host/multipath.sh@72 -- # kill 86215 00:23:20.466 02:20:34 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:20.466 02:20:34 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:20.466 02:20:34 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:20.466 02:20:35 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:20.725 02:20:35 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:20.725 02:20:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86018 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:20.725 02:20:35 -- host/multipath.sh@65 -- # dtrace_pid=86348 00:23:20.725 02:20:35 -- host/multipath.sh@66 -- # sleep 6 00:23:27.291 02:20:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:27.291 02:20:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:27.291 02:20:41 -- host/multipath.sh@67 -- # active_port=4420 00:23:27.291 02:20:41 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.291 Attaching 4 probes... 
00:23:27.291 @path[10.0.0.2, 4420]: 16580 00:23:27.291 @path[10.0.0.2, 4420]: 17030 00:23:27.291 @path[10.0.0.2, 4420]: 17022 00:23:27.291 @path[10.0.0.2, 4420]: 16684 00:23:27.291 @path[10.0.0.2, 4420]: 16865 00:23:27.291 02:20:41 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:27.291 02:20:41 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:27.291 02:20:41 -- host/multipath.sh@69 -- # sed -n 1p 00:23:27.291 02:20:41 -- host/multipath.sh@69 -- # port=4420 00:23:27.291 02:20:41 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:27.291 02:20:41 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:27.291 02:20:41 -- host/multipath.sh@72 -- # kill 86348 00:23:27.291 02:20:41 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.291 02:20:41 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:27.291 02:20:41 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:27.291 02:20:41 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:27.550 02:20:42 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:27.550 02:20:42 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86018 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:27.550 02:20:42 -- host/multipath.sh@65 -- # dtrace_pid=86480 00:23:27.550 02:20:42 -- host/multipath.sh@66 -- # sleep 6 00:23:34.112 02:20:48 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:34.112 02:20:48 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:34.112 02:20:48 -- host/multipath.sh@67 -- # active_port=4421 00:23:34.112 02:20:48 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:34.112 Attaching 4 probes... 
00:23:34.112 @path[10.0.0.2, 4421]: 12799 00:23:34.112 @path[10.0.0.2, 4421]: 17950 00:23:34.112 @path[10.0.0.2, 4421]: 20271 00:23:34.112 @path[10.0.0.2, 4421]: 20384 00:23:34.112 @path[10.0.0.2, 4421]: 20597 00:23:34.112 02:20:48 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:34.112 02:20:48 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:34.112 02:20:48 -- host/multipath.sh@69 -- # sed -n 1p 00:23:34.112 02:20:48 -- host/multipath.sh@69 -- # port=4421 00:23:34.112 02:20:48 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:34.112 02:20:48 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:34.112 02:20:48 -- host/multipath.sh@72 -- # kill 86480 00:23:34.112 02:20:48 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:34.112 02:20:48 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:34.112 02:20:48 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:34.112 02:20:48 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:34.371 02:20:48 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:34.371 02:20:48 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86018 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:34.371 02:20:48 -- host/multipath.sh@65 -- # dtrace_pid=86615 00:23:34.371 02:20:48 -- host/multipath.sh@66 -- # sleep 6 00:23:40.967 02:20:54 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:40.967 02:20:54 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:40.967 02:20:55 -- host/multipath.sh@67 -- # active_port= 00:23:40.967 02:20:55 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:40.967 Attaching 4 probes... 
00:23:40.967 00:23:40.967 00:23:40.967 00:23:40.967 00:23:40.967 00:23:40.967 02:20:55 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:40.967 02:20:55 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:40.967 02:20:55 -- host/multipath.sh@69 -- # sed -n 1p 00:23:40.967 02:20:55 -- host/multipath.sh@69 -- # port= 00:23:40.967 02:20:55 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:40.967 02:20:55 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:40.967 02:20:55 -- host/multipath.sh@72 -- # kill 86615 00:23:40.967 02:20:55 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:40.967 02:20:55 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:40.967 02:20:55 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:40.967 02:20:55 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:41.226 02:20:55 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:41.226 02:20:55 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86018 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:41.226 02:20:55 -- host/multipath.sh@65 -- # dtrace_pid=86741 00:23:41.226 02:20:55 -- host/multipath.sh@66 -- # sleep 6 00:23:47.793 02:21:01 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:47.793 02:21:01 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:47.793 02:21:01 -- host/multipath.sh@67 -- # active_port=4421 00:23:47.793 02:21:01 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:47.793 Attaching 4 probes... 
00:23:47.793 @path[10.0.0.2, 4421]: 19415 00:23:47.793 @path[10.0.0.2, 4421]: 19467 00:23:47.793 @path[10.0.0.2, 4421]: 19542 00:23:47.793 @path[10.0.0.2, 4421]: 19431 00:23:47.793 @path[10.0.0.2, 4421]: 19072 00:23:47.793 02:21:01 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:47.793 02:21:01 -- host/multipath.sh@69 -- # sed -n 1p 00:23:47.793 02:21:01 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:47.793 02:21:01 -- host/multipath.sh@69 -- # port=4421 00:23:47.793 02:21:01 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:47.793 02:21:01 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:47.793 02:21:01 -- host/multipath.sh@72 -- # kill 86741 00:23:47.793 02:21:01 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:47.794 02:21:01 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:47.794 [2024-05-14 02:21:02.081281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081635] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 
00:23:47.794 [2024-05-14 02:21:02.081832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.794 [2024-05-14 02:21:02.081975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.081983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.081992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is 
same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 [2024-05-14 02:21:02.082123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef50b0 is same with the state(5) to be set 00:23:47.795 02:21:02 -- host/multipath.sh@101 -- # sleep 1 00:23:48.733 02:21:03 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:48.733 02:21:03 -- host/multipath.sh@65 -- # dtrace_pid=86877 00:23:48.733 02:21:03 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86018 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:48.733 02:21:03 -- host/multipath.sh@66 -- # sleep 6 00:23:55.300 02:21:09 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:55.300 02:21:09 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:55.300 02:21:09 -- host/multipath.sh@67 -- # active_port=4420 00:23:55.300 02:21:09 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:55.300 Attaching 4 probes... 
00:23:55.300 @path[10.0.0.2, 4420]: 19106 00:23:55.300 @path[10.0.0.2, 4420]: 19308 00:23:55.300 @path[10.0.0.2, 4420]: 16814 00:23:55.300 @path[10.0.0.2, 4420]: 16664 00:23:55.300 @path[10.0.0.2, 4420]: 16447 00:23:55.300 02:21:09 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:55.300 02:21:09 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:55.300 02:21:09 -- host/multipath.sh@69 -- # sed -n 1p 00:23:55.300 02:21:09 -- host/multipath.sh@69 -- # port=4420 00:23:55.300 02:21:09 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:55.300 02:21:09 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:55.300 02:21:09 -- host/multipath.sh@72 -- # kill 86877 00:23:55.300 02:21:09 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:55.300 02:21:09 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:55.300 [2024-05-14 02:21:09.604999] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:55.300 02:21:09 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:55.300 02:21:09 -- host/multipath.sh@111 -- # sleep 6 00:24:01.864 02:21:15 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:01.864 02:21:15 -- host/multipath.sh@65 -- # dtrace_pid=87068 00:24:01.864 02:21:15 -- host/multipath.sh@66 -- # sleep 6 00:24:01.864 02:21:15 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86018 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:08.445 02:21:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:08.445 02:21:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:08.445 02:21:22 -- host/multipath.sh@67 -- # active_port=4421 00:24:08.445 02:21:22 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:08.445 Attaching 4 probes... 
00:24:08.445 @path[10.0.0.2, 4421]: 16327 00:24:08.445 @path[10.0.0.2, 4421]: 16686 00:24:08.445 @path[10.0.0.2, 4421]: 16528 00:24:08.445 @path[10.0.0.2, 4421]: 16617 00:24:08.445 @path[10.0.0.2, 4421]: 16567 00:24:08.445 02:21:22 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:08.445 02:21:22 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:08.445 02:21:22 -- host/multipath.sh@69 -- # sed -n 1p 00:24:08.445 02:21:22 -- host/multipath.sh@69 -- # port=4421 00:24:08.445 02:21:22 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:08.445 02:21:22 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:08.445 02:21:22 -- host/multipath.sh@72 -- # kill 87068 00:24:08.445 02:21:22 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:08.445 02:21:22 -- host/multipath.sh@114 -- # killprocess 86122 00:24:08.445 02:21:22 -- common/autotest_common.sh@926 -- # '[' -z 86122 ']' 00:24:08.445 02:21:22 -- common/autotest_common.sh@930 -- # kill -0 86122 00:24:08.445 02:21:22 -- common/autotest_common.sh@931 -- # uname 00:24:08.445 02:21:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:08.446 02:21:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86122 00:24:08.446 02:21:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:08.446 02:21:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:08.446 killing process with pid 86122 00:24:08.446 02:21:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86122' 00:24:08.446 02:21:22 -- common/autotest_common.sh@945 -- # kill 86122 00:24:08.446 02:21:22 -- common/autotest_common.sh@950 -- # wait 86122 00:24:08.446 Connection closed with partial response: 00:24:08.446 00:24:08.446 00:24:08.446 02:21:22 -- host/multipath.sh@116 -- # wait 86122 00:24:08.446 02:21:22 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:08.446 [2024-05-14 02:20:24.913509] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:08.446 [2024-05-14 02:20:24.913626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86122 ] 00:24:08.446 [2024-05-14 02:20:25.054884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.446 [2024-05-14 02:20:25.123718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.446 Running I/O for 90 seconds... 
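Each confirm_io_on_port pass above follows the same pattern: set_ANA_state flips the ANA state of the two listeners, a bpftrace probe attached to the target pid counts I/O per path, and the helper checks that the port actually carrying traffic is the one whose listener reports the requested state. A rough sketch of one pass, with 86018 being the target pid from this run; the redirect into trace.txt and the 6-second window reflect how the helper appears to run in this trace:

    # example: make 4420 non_optimized and 4421 optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
    # count I/O per path with the nvmf_path.bt probe while bdevperf keeps running
    scripts/bpftrace.sh 86018 scripts/bpf/nvmf_path.bt &> trace.txt &
    sleep 6
    # port the target reports for the expected ANA state
    active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
    # port that actually carried the probed I/O (@path[10.0.0.2, 4421]: <count> lines)
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    [[ $port == "$active_port" ]]

The bdevperf output dumped below (try.txt) is the initiator-side view of the same 90-second run.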
00:24:08.446 [2024-05-14 02:20:35.220863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.446 [2024-05-14 02:20:35.220946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.446 [2024-05-14 02:20:35.221067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.446 [2024-05-14 02:20:35.221457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.446 [2024-05-14 02:20:35.221494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.446 [2024-05-14 02:20:35.221568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.446 [2024-05-14 02:20:35.221607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221718] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.221964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.221982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.222004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.446 [2024-05-14 02:20:35.222020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.222042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.222058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.222080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.222096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.222118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103064 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.222133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.222155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.446 [2024-05-14 02:20:35.222171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.222193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.446 [2024-05-14 02:20:35.222208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.222870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.446 [2024-05-14 02:20:35.222899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:08.446 [2024-05-14 02:20:35.222927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.446 [2024-05-14 02:20:35.222944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.222967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.222983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.447 [2024-05-14 02:20:35.223073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.447 [2024-05-14 02:20:35.223153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.447 [2024-05-14 02:20:35.223310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.447 [2024-05-14 02:20:35.223386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.447 [2024-05-14 02:20:35.223513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e 
p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.447 [2024-05-14 02:20:35.223587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.447 [2024-05-14 02:20:35.223624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.447 [2024-05-14 02:20:35.223769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.447 [2024-05-14 02:20:35.223863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.223975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.223998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 
02:20:35.224362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.447 [2024-05-14 02:20:35.224523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.447 [2024-05-14 02:20:35.224545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.224560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.224598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.224636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.224673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.224710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102904 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.224748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.224800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.224838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.224875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.224912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.224948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.224978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.224994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.225035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.225072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.225110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.225147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.225185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.225222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.225261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.225298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.225336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.225373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.225395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.225411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.226248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 
dnr:0 00:24:08.448 [2024-05-14 02:20:35.226270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.226361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.226399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.226614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.448 [2024-05-14 02:20:35.226835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:08.448 [2024-05-14 02:20:35.226861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.448 [2024-05-14 02:20:35.226878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.226899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:35.226915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.226937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:35.226952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.226974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:35.226989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:35.227026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 
02:20:35.227063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:35.227100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:35.227137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:35.227183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:35.227222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:35.227259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:35.227296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:35.227333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:35.227371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:35.227408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:35.227430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103656 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:35.227446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.753319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:41.753391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.753494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:41.753515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.753538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.753554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.753575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.753591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.753615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.753684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.753925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:41.753962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.753991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:41.754009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754108] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.449 [2024-05-14 02:20:41.754211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 
02:20:41.754537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.449 [2024-05-14 02:20:41.754762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.449 [2024-05-14 02:20:41.754800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.754817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.754839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.754855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.754889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.754908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.754931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.754947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.754969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.754985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.450 [2024-05-14 02:20:41.755391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.450 [2024-05-14 02:20:41.755465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.450 [2024-05-14 02:20:41.755563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
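The completion lines above all carry the same status: the name plus "(03/02)" printed by spdk_nvme_print_completion is the NVMe Status Code Type / Status Code pair (SCT 0x3, Path Related Status; SC 0x02, Asymmetric Access Inaccessible), and the trailing cdw0/sqhd/p/m/dnr fields are completion dword 0, the submission queue head pointer, the phase tag, the more-information bit and the do-not-retry bit. A minimal, self-contained C sketch of that decoding follows; the bit layout is taken from the NVMe base specification and the helper is illustrative, not an SPDK API.

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit completion status word (NVMe base spec, CQE Dword 3
 * bits 16-31): phase tag in bit 0, then SC, SCT, CRD, M and DNR. */
static void print_status(uint16_t raw)
{
    unsigned p   =  raw        & 0x1;   /* phase tag                    */
    unsigned sc  = (raw >> 1)  & 0xff;  /* status code                  */
    unsigned sct = (raw >> 9)  & 0x7;   /* status code type             */
    unsigned m   = (raw >> 14) & 0x1;   /* more info in error log page  */
    unsigned dnr = (raw >> 15) & 0x1;   /* do not retry                 */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x3 (Path Related Status) / SC 0x02 (Asymmetric Access Inaccessible),
     * i.e. the "(03/02) ... p:0 m:0 dnr:0" suffix printed in the log above. */
    print_status((0x3 << 9) | (0x02 << 1));
    return 0;
}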
00:24:08.450 [2024-05-14 02:20:41.755826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.755975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.755992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.756014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.756036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.756060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.450 [2024-05-14 02:20:41.756076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.756262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.450 [2024-05-14 02:20:41.756282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:08.450 [2024-05-14 02:20:41.756304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.756320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.756357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 
nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.756394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.756449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.756487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.756534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.756572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.756610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.756649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.756694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.756735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.756773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.756795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.756812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.757052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.757160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.757202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.757304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
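The matching command lines from nvme_io_qpair_print_command carry the opcode (READ/WRITE), the submission queue id, command id, namespace id, starting LBA, length in blocks and the SGL descriptor used for the data. A hypothetical helper for pulling those fields out of a saved console log is sketched below; the line format is copied from the log itself and the function name is ours, not part of SPDK.

#include <stdio.h>
#include <string.h>

/* Hypothetical helper: extract the interesting fields from one
 * nvme_io_qpair_print_command line as it appears in this console log, e.g.
 *   READ sqid:1 cid:14 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK ...
 * Returns 1 on success, 0 if the line does not look like a command print. */
static int parse_cmd_line(const char *line, char op[8],
                          unsigned *sqid, unsigned *cid,
                          unsigned long long *lba, unsigned *len)
{
    const char *p = strstr(line, "*NOTICE*: ");
    if (p == NULL)
        return 0;
    p += strlen("*NOTICE*: ");

    unsigned nsid;
    return sscanf(p, "%7s sqid:%u cid:%u nsid:%u lba:%llu len:%u",
                  op, sqid, cid, &nsid, lba, len) == 6;
}

int main(void)
{
    const char *line =
        "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: "
        "READ sqid:1 cid:14 nsid:1 lba:37384 len:8 "
        "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0";
    char op[8];
    unsigned sqid, cid, len;
    unsigned long long lba;

    if (parse_cmd_line(line, op, &sqid, &cid, &lba, &len))
        printf("%s qpair %u cid %u: lba %llu, %u blocks\n",
               op, sqid, cid, lba, len);
    return 0;
}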
00:24:08.451 [2024-05-14 02:20:41.757479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.757826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.757870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.757914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.757968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.757997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.758014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.758050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.758068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.758105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.758122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.758149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.451 [2024-05-14 02:20:41.758165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.758193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.758210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.758238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.758254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.758281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.758297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:08.451 [2024-05-14 02:20:41.758324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.451 [2024-05-14 02:20:41.758341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.758390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:41.758433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.758477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:41.758522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.758565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:41.758615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:41.758666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.758710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.758754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.758813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:08.452 [2024-05-14 02:20:41.758857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.758901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.758944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.758971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.758988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.759032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.759075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.759122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:41.759173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:41.759219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.759263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.759307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.759350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.759393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:41.759437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:41.759480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.759523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.759566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:41.759594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:41.759610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.752291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.752759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.752942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.753050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.753235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.753313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.753395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.452 [2024-05-14 02:20:48.753478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.753589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.753702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.753789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.753874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.754024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.754111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.754198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.754293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.754400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.754482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.754800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.754924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.755016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.755102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:08.452 [2024-05-14 02:20:48.755225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.452 [2024-05-14 02:20:48.755301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:08.453 
[2024-05-14 02:20:48.755379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.453 [2024-05-14 02:20:48.755460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.755543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.453 [2024-05-14 02:20:48.755622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.755720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.755828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.755912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.755999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.756075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.756187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.756272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.756348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.756431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.756510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.756595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.756691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.756793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.756897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.757000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.757089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.757213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.757290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.757369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.757436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.757505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.757571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.757654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.757738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.757869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.757991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.758079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.758164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.758278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.453 [2024-05-14 02:20:48.758362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.758442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.758518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.758598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.758676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.758759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.758892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.758982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.759067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.759186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.759265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.759354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.453 [2024-05-14 02:20:48.759421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.759509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.759585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.759671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.453 [2024-05-14 02:20:48.759747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.759862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.759934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.760021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.760113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.760201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.760281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.760353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.760427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.760511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.760587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.760691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.760787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.760903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.761002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.761088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.761159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.761551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.453 [2024-05-14 02:20:48.761661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.761745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.453 [2024-05-14 02:20:48.761897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.762010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.453 [2024-05-14 02:20:48.762095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.762185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.453 [2024-05-14 02:20:48.762284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.762402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.762481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.762560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 [2024-05-14 02:20:48.762635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.762733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.453 
[2024-05-14 02:20:48.762829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:08.453 [2024-05-14 02:20:48.762943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.763026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.763119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.763212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.763313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.763414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.763498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.763564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.763648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.763760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.763874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.763962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.764062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.764137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.764240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.764318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.764396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.764468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.764548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2216 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.764625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.764695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.764767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.764875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.764967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.765058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.765158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.765235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.765299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.765379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.765455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.765554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.765633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.765711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.765818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.765932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.766075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.766154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.766244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.766333] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.766453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.766531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.766604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.766673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.766747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.766841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.766928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.767000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.767088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.767170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.767246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.767315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.767388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.767465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.767539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.767617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.767707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.767833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.767921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.767996] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.768087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.768159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.768253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.768331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.768407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.768476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.768551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.768650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.454 [2024-05-14 02:20:48.768743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.454 [2024-05-14 02:20:48.768816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.454 [2024-05-14 02:20:48.768927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.769023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.769110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.769241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.769319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.769398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.769463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.769547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.769620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:24:08.455 [2024-05-14 02:20:48.769689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.769752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.769847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.769921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.770068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.770218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.770329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.770445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.770520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.770609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.770693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.770763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.770859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.770951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.771028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.771131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.771226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.771293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.771388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.771464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.771555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.771622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.771724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.771807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.771904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.771988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.772064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.772167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.772250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.772319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.773536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.773632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.773733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.773838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.773938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.774049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.774142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.774225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.774329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.774414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.774495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.774574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.774678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.774779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.774893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.774969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.775065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.775150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.775226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.775320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.775408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.775491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.775574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.775671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.775747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.775845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.775952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.776045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.776130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:08.455 [2024-05-14 02:20:48.776211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.776287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.776368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.776459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.776546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.776622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.776694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.776797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.455 [2024-05-14 02:20:48.776899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.776977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.777071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.777162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.455 [2024-05-14 02:20:48.777234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:08.455 [2024-05-14 02:20:48.777319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.777400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.777486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.777557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.777647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.777730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.777843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3128 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.777917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.778115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.778170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.778208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.778246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.778284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.778323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.778834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.778883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.778929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778951] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.778968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.778989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.779006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779349] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.779600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 
dnr:0 00:24:08.456 [2024-05-14 02:20:48.779736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.779855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.456 [2024-05-14 02:20:48.779931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.779953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.779979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.780001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.780017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:08.456 [2024-05-14 02:20:48.780039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.456 [2024-05-14 02:20:48.780055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.780294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.780349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.780385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.780436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.780578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.780614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.780666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:08.457 [2024-05-14 02:20:48.780956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.780978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.780994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.781032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.781070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.781108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.781161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.781197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.781232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.781268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.781319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2792 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.781356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.781392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.781427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.781463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.781498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.781533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.457 [2024-05-14 02:20:48.781568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.781604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:08.457 [2024-05-14 02:20:48.781624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.457 [2024-05-14 02:20:48.781655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.781677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.781693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.781724] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.781740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.781762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.781788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.781825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.781842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.781864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.781881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.781903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.781920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.781942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.781971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.781995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.782017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.782056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.782094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.782132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.782169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.782207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.782245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.782307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.782357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.782392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.782428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.782480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.782516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.782553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.458 
[2024-05-14 02:20:48.782574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.782590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.782627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.782684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.782707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.782723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.783763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.783792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.783820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.783850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.783889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.783907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.783929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.783946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.783968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.783984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.784005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.784022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.784043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.458 [2024-05-14 02:20:48.784059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.784081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.784097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.784119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.784135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.458 [2024-05-14 02:20:48.784171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.458 [2024-05-14 02:20:48.784202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.784238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.784410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.784688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.784726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.784801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.784839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784891] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.784978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.784999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.785016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.785054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.785094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.785509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.785551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.785600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.785647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:08.459 [2024-05-14 02:20:48.785700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.459 [2024-05-14 02:20:48.785738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.785776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.785823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.785893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.785932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.785964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.785983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.786005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.786022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.786043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.786059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.786081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.786097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.786119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1944 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.786135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.786157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.786174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.786196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.786212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.786234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.786265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:08.459 [2024-05-14 02:20:48.786286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.459 [2024-05-14 02:20:48.786301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.460 [2024-05-14 02:20:48.786385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786516] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.460 [2024-05-14 02:20:48.786624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.460 [2024-05-14 02:20:48.786700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 
02:20:48.786954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.786971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.786993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.787009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.787061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.460 [2024-05-14 02:20:48.787099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.460 [2024-05-14 02:20:48.787151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.460 [2024-05-14 02:20:48.787188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.460 [2024-05-14 02:20:48.787225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.787261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.787298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.787335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 
sqhd:005f p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.460 [2024-05-14 02:20:48.787378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.460 [2024-05-14 02:20:48.787417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.460 [2024-05-14 02:20:48.787957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.787984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.460 [2024-05-14 02:20:48.788464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:08.460 [2024-05-14 02:20:48.788484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.788498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.788533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.788567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.788603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.788655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.788693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.788730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.788768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.788806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.788861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.788910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.788948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.788969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.788985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.789023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:08.461 [2024-05-14 02:20:48.789060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.789200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.789235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.789408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.789463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2952 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.789613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.789689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.461 [2024-05-14 02:20:48.789766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.789803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.789854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789876] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.789900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.789923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.802731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.802800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.802825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.802849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.802866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.802888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.802904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.802925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.461 [2024-05-14 02:20:48.802941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.461 [2024-05-14 02:20:48.802963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.802979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.803016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.803106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 
02:20:48.803141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.803171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.803323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.803460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.803684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.803732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.803817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.803870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.803967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.803983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.804004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.804020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.804056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.804072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.805031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.805078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.805117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.805184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.805238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.805274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.462 [2024-05-14 02:20:48.805310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.805345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.805381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.805416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.805465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.805500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:08.462 [2024-05-14 02:20:48.805519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.462 [2024-05-14 02:20:48.805534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.805554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.805569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.805588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.805603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.805622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:08.463 [2024-05-14 02:20:48.805653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.805699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.805717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.805739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.805755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.805777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.805793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.805814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.805831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.805869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.805887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.805909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.463 [2024-05-14 02:20:48.805926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.805971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.805990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 
lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.463 [2024-05-14 02:20:48.806180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.463 [2024-05-14 02:20:48.806309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806528] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.463 [2024-05-14 02:20:48.806720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.463 [2024-05-14 02:20:48.806770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.463 [2024-05-14 02:20:48.806845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.463 [2024-05-14 02:20:48.806900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.806950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.806979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.807000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.807037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.463 [2024-05-14 02:20:48.807058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.807094] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.463 [2024-05-14 02:20:48.807125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.807805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.463 [2024-05-14 02:20:48.807841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.463 [2024-05-14 02:20:48.807877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.807899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.807928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.807954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.807984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 
dnr:0 00:24:08.464 [2024-05-14 02:20:48.808330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.808730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.808808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.808863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.808927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.808955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.808976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.809026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.809086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.809146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.809207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.809257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.809307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.809357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.809406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.809457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.809507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.809568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.809618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.809683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.809734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.809804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.809865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.809915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.809944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 
[2024-05-14 02:20:48.809980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.810011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.810032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.810062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.464 [2024-05-14 02:20:48.810083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.464 [2024-05-14 02:20:48.810112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.464 [2024-05-14 02:20:48.810133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.810187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.810324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.810374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2328 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.810930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.810960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.810991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.811049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811078] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.811102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.811313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.811513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811603] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.811873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.811923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.811952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.811974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.812002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.812035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.812063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.812084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.812113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.812145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:24:08.465 [2024-05-14 02:20:48.812179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.812200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.812239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.812260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.812299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.465 [2024-05-14 02:20:48.812322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:08.465 [2024-05-14 02:20:48.813499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.465 [2024-05-14 02:20:48.813536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.813584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.813607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.813636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.813657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.813687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.813708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.813737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.813758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.813808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.813831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.813860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.813881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.813920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.813941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.813984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814459] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.814802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.814955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.814987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.815045] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.815095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.815145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.815195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.815245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.815295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.815345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.815399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.815451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.815501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:08.466 [2024-05-14 02:20:48.815563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.815614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.466 [2024-05-14 02:20:48.815674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.815725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.466 [2024-05-14 02:20:48.815790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:08.466 [2024-05-14 02:20:48.815821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.815842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.815871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.815903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.815932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.815962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.815991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.816013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.816051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.816083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.816390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.816427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.816488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.816514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.816565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.816611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.816658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.816681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.816715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.816736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.816798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.816824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.816858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.816880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.816914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.816935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.816969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.816990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.817045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.817100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.817156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.817221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.817276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.817359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.817416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.817471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.817536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.817595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.817654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:24:08.467 [2024-05-14 02:20:48.817688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.817709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.817791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.817849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.817904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.817938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.817987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.818023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.818045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.818079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.818100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.818145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.818168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.818201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.467 [2024-05-14 02:20:48.818222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.818256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.467 [2024-05-14 02:20:48.818277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:08.467 [2024-05-14 02:20:48.818311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.818332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.818365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.818386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.818420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.818442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.818484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.818515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.827995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.828049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.828096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.828135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.828274] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.828313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.828352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.828428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.828505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.828543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828657] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.828961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.828976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.829015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.829055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.829094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.829133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.829187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.829225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.829271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.829312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.829350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.829389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.468 [2024-05-14 02:20:48.829428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.829467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.468 [2024-05-14 02:20:48.829506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.468 [2024-05-14 02:20:48.829528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:20:48.829543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.829566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:20:48.829581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.829604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:20:48.829619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.829657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:20:48.829673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.829696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:20:48.829712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.829736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:20:48.829751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.829783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:20:48.829800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.829863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:20:48.829881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.829906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:20:48.829921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.829975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:20:48.829996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.830023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:20:48.830040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:20:48.830492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:20:48.830531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082938] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.082967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.082981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.083021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.083050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:21:02.083080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.083110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:21:02.083138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.083168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.469 [2024-05-14 02:21:02.083197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.083226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.083255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.469 [2024-05-14 02:21:02.083284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.469 [2024-05-14 02:21:02.083299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.083343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.083378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.083438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.083526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.083702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 
[2024-05-14 02:21:02.083870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.083974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.083988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084170] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.084229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.084258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.084287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.084347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.084436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.084464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.470 [2024-05-14 02:21:02.084494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.470 [2024-05-14 02:21:02.084529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.470 [2024-05-14 02:21:02.084545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.471 [2024-05-14 02:21:02.084705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084773] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122128 len:8 SGL TRANSP 02:21:22 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.471 ORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.084980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.084994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122232 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.471 [2024-05-14 02:21:02.085237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.471 [2024-05-14 02:21:02.085361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:08.471 [2024-05-14 02:21:02.085390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.471 [2024-05-14 02:21:02.085422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.471 [2024-05-14 02:21:02.085452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.471 [2024-05-14 02:21:02.085481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.471 [2024-05-14 02:21:02.085510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.471 [2024-05-14 02:21:02.085614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.471 [2024-05-14 02:21:02.085628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.472 [2024-05-14 02:21:02.085657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.472 [2024-05-14 
02:21:02.085693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.085722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.085751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.472 [2024-05-14 02:21:02.085793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.085824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.472 [2024-05-14 02:21:02.085854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.085885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.472 [2024-05-14 02:21:02.085916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.085947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.085975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.472 [2024-05-14 02:21:02.085992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.086021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.086051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.472 [2024-05-14 02:21:02.086091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.086122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.086151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.086181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.086212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.086241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.086271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.472 [2024-05-14 02:21:02.086300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77fbd0 is same with the state(5) to be set 00:24:08.472 [2024-05-14 02:21:02.086335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:24:08.472 [2024-05-14 02:21:02.086347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:08.472 [2024-05-14 02:21:02.086359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122352 len:8 PRP1 0x0 PRP2 0x0 00:24:08.472 [2024-05-14 02:21:02.086372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.472 [2024-05-14 02:21:02.086443] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x77fbd0 was disconnected and freed. reset controller. 00:24:08.472 [2024-05-14 02:21:02.087785] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.472 [2024-05-14 02:21:02.087875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91a6e0 (9): Bad file descriptor 00:24:08.472 [2024-05-14 02:21:02.088012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.472 [2024-05-14 02:21:02.088077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.472 [2024-05-14 02:21:02.088102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91a6e0 with addr=10.0.0.2, port=4421 00:24:08.472 [2024-05-14 02:21:02.088118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91a6e0 is same with the state(5) to be set 00:24:08.472 [2024-05-14 02:21:02.088159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91a6e0 (9): Bad file descriptor 00:24:08.472 [2024-05-14 02:21:02.088184] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.472 [2024-05-14 02:21:02.088199] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.472 [2024-05-14 02:21:02.088214] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.472 [2024-05-14 02:21:02.088334] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.472 [2024-05-14 02:21:02.088359] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.472 [2024-05-14 02:21:12.148960] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
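The sequence above is the multipath failover path: the qpair is disconnected and freed, reconnect attempts are refused (connect() errno 111) while the listener is down, the controller briefly sits in a failed state, and roughly ten seconds later "Resetting controller successful" is logged once the path is back. How aggressively the host retries is configured when the controller is attached over the bdevperf RPC socket. The multipath script's own flags are not part of this excerpt, so the minimal sketch below reuses the values the timeout test passes later in this same log:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Flags taken verbatim from the timeout-test trace later in this log;
  # -s selects bdevperf's RPC socket rather than the target's default socket.
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2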
00:24:08.472 Received shutdown signal, test time was about 55.185525 seconds
00:24:08.472
00:24:08.472 Latency(us)
00:24:08.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:08.472 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:08.472 Verification LBA range: start 0x0 length 0x4000
00:24:08.472 Nvme0n1 : 55.18 10058.56 39.29 0.00 0.00 12707.44 215.04 7107438.78
00:24:08.472 ===================================================================================================================
00:24:08.472 Total : 10058.56 39.29 0.00 0.00 12707.44 215.04 7107438.78
00:24:08.472 02:21:22 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:08.472 02:21:22 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:08.472 02:21:22 -- host/multipath.sh@125 -- # nvmftestfini
00:24:08.472 02:21:22 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:08.472 02:21:22 -- nvmf/common.sh@116 -- # sync
00:24:08.472 02:21:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:08.472 02:21:22 -- nvmf/common.sh@119 -- # set +e
00:24:08.472 02:21:22 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:08.472 02:21:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:08.472 rmmod nvme_tcp
00:24:08.472 rmmod nvme_fabrics
00:24:08.472 rmmod nvme_keyring
00:24:08.472 02:21:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:08.472 02:21:22 -- nvmf/common.sh@123 -- # set -e
00:24:08.472 02:21:22 -- nvmf/common.sh@124 -- # return 0
00:24:08.472 02:21:22 -- nvmf/common.sh@477 -- # '[' -n 86018 ']'
00:24:08.472 02:21:22 -- nvmf/common.sh@478 -- # killprocess 86018
00:24:08.472 02:21:22 -- common/autotest_common.sh@926 -- # '[' -z 86018 ']'
00:24:08.472 02:21:22 -- common/autotest_common.sh@930 -- # kill -0 86018
00:24:08.472 02:21:22 -- common/autotest_common.sh@931 -- # uname
00:24:08.472 02:21:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:08.472 02:21:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86018
00:24:08.472 02:21:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:08.472 killing process with pid 86018
00:24:08.472 02:21:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:08.472 02:21:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86018'
00:24:08.472 02:21:22 -- common/autotest_common.sh@945 -- # kill 86018
00:24:08.472 02:21:22 -- common/autotest_common.sh@950 -- # wait 86018
00:24:08.472 02:21:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:08.472 02:21:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:08.472 02:21:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:08.472 02:21:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:08.472 02:21:22 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:08.472 02:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:08.472 02:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:08.472 02:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:08.473 02:21:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:24:08.473
00:24:08.473 real 1m1.327s
00:24:08.473 user 2m53.107s
00:24:08.473 sys 0m14.030s
00:24:08.473 02:21:23 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:08.473 ************************************
00:24:08.473 END TEST nvmf_multipath
00:24:08.473 ************************************
00:24:08.473 02:21:23 --
common/autotest_common.sh@10 -- # set +x 00:24:08.732 02:21:23 -- nvmf/nvmf.sh@116 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:08.732 02:21:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:08.732 02:21:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:08.732 02:21:23 -- common/autotest_common.sh@10 -- # set +x 00:24:08.732 ************************************ 00:24:08.732 START TEST nvmf_timeout 00:24:08.732 ************************************ 00:24:08.732 02:21:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:08.732 * Looking for test storage... 00:24:08.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:08.732 02:21:23 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:08.732 02:21:23 -- nvmf/common.sh@7 -- # uname -s 00:24:08.732 02:21:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.732 02:21:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.732 02:21:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.732 02:21:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.732 02:21:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.732 02:21:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.732 02:21:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.732 02:21:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.732 02:21:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.732 02:21:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.732 02:21:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:24:08.732 02:21:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:24:08.732 02:21:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.732 02:21:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.732 02:21:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:08.732 02:21:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:08.732 02:21:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.732 02:21:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.732 02:21:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.732 02:21:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.732 02:21:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.732 02:21:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.732 02:21:23 -- paths/export.sh@5 -- # export PATH 00:24:08.732 02:21:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.732 02:21:23 -- nvmf/common.sh@46 -- # : 0 00:24:08.732 02:21:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:08.732 02:21:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:08.732 02:21:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:08.732 02:21:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.732 02:21:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.732 02:21:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:08.732 02:21:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:08.732 02:21:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:08.732 02:21:23 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.732 02:21:23 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.732 02:21:23 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:08.732 02:21:23 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:08.732 02:21:23 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.732 02:21:23 -- host/timeout.sh@19 -- # nvmftestinit 00:24:08.732 02:21:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:08.732 02:21:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.732 02:21:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:08.732 02:21:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:08.732 02:21:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:08.732 02:21:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.732 02:21:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.732 02:21:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.732 02:21:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
00:24:08.732 02:21:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:08.732 02:21:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:08.732 02:21:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:08.732 02:21:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:08.732 02:21:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:08.732 02:21:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.732 02:21:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.732 02:21:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:08.732 02:21:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:08.732 02:21:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:08.732 02:21:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:08.732 02:21:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:08.732 02:21:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.732 02:21:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:08.732 02:21:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:08.732 02:21:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:08.732 02:21:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:08.732 02:21:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:08.732 02:21:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:08.732 Cannot find device "nvmf_tgt_br" 00:24:08.732 02:21:23 -- nvmf/common.sh@154 -- # true 00:24:08.732 02:21:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:08.732 Cannot find device "nvmf_tgt_br2" 00:24:08.733 02:21:23 -- nvmf/common.sh@155 -- # true 00:24:08.733 02:21:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:08.733 02:21:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:08.733 Cannot find device "nvmf_tgt_br" 00:24:08.733 02:21:23 -- nvmf/common.sh@157 -- # true 00:24:08.733 02:21:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:08.733 Cannot find device "nvmf_tgt_br2" 00:24:08.733 02:21:23 -- nvmf/common.sh@158 -- # true 00:24:08.733 02:21:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:08.733 02:21:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:08.733 02:21:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:08.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:08.733 02:21:23 -- nvmf/common.sh@161 -- # true 00:24:08.733 02:21:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:09.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:09.016 02:21:23 -- nvmf/common.sh@162 -- # true 00:24:09.016 02:21:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:09.016 02:21:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:09.016 02:21:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:09.016 02:21:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:09.016 02:21:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:09.016 02:21:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:09.016 02:21:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:24:09.016 02:21:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:09.016 02:21:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:09.016 02:21:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:09.016 02:21:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:09.016 02:21:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:09.016 02:21:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:09.016 02:21:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:09.016 02:21:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:09.016 02:21:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:09.016 02:21:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:09.016 02:21:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:09.016 02:21:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:09.016 02:21:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:09.016 02:21:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:09.016 02:21:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:09.016 02:21:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:09.016 02:21:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:09.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:24:09.016 00:24:09.016 --- 10.0.0.2 ping statistics --- 00:24:09.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.016 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:09.016 02:21:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:09.016 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:09.016 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:24:09.016 00:24:09.016 --- 10.0.0.3 ping statistics --- 00:24:09.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.016 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:09.016 02:21:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:09.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:09.016 00:24:09.016 --- 10.0.0.1 ping statistics --- 00:24:09.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.016 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:09.016 02:21:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.016 02:21:23 -- nvmf/common.sh@421 -- # return 0 00:24:09.016 02:21:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:09.016 02:21:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.016 02:21:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:09.016 02:21:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:09.016 02:21:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.016 02:21:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:09.016 02:21:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:09.016 02:21:23 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:09.016 02:21:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:09.016 02:21:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:09.016 02:21:23 -- common/autotest_common.sh@10 -- # set +x 00:24:09.016 02:21:23 -- nvmf/common.sh@469 -- # nvmfpid=87386 00:24:09.016 02:21:23 -- nvmf/common.sh@470 -- # waitforlisten 87386 00:24:09.016 02:21:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:09.016 02:21:23 -- common/autotest_common.sh@819 -- # '[' -z 87386 ']' 00:24:09.016 02:21:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.016 02:21:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:09.016 02:21:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.016 02:21:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:09.016 02:21:23 -- common/autotest_common.sh@10 -- # set +x 00:24:09.016 [2024-05-14 02:21:23.586218] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:09.016 [2024-05-14 02:21:23.586309] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.287 [2024-05-14 02:21:23.727940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:09.287 [2024-05-14 02:21:23.801794] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:09.287 [2024-05-14 02:21:23.802022] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.287 [2024-05-14 02:21:23.802045] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.287 [2024-05-14 02:21:23.802056] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
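The block above is nvmf_veth_init building the virtual test network before nvmf_tgt is launched inside the namespace: one network namespace for the target, veth pairs whose host-side ends are joined by a bridge, addresses 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target), an iptables accept rule for the NVMe/TCP port, and ping checks in both directions. Condensed from the commands traced above into a standalone sketch (root required; interface and namespace names exactly as in the trace):

  # Namespace for the target, plus veth pairs bridging it to the host side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addresses: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bring all links up and join the host-side ends with a bridge
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Allow NVMe/TCP traffic in and verify reachability both ways
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1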
00:24:09.287 [2024-05-14 02:21:23.802575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.287 [2024-05-14 02:21:23.802590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.227 02:21:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:10.227 02:21:24 -- common/autotest_common.sh@852 -- # return 0 00:24:10.227 02:21:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:10.227 02:21:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:10.227 02:21:24 -- common/autotest_common.sh@10 -- # set +x 00:24:10.227 02:21:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.227 02:21:24 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.227 02:21:24 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:10.487 [2024-05-14 02:21:24.925173] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.487 02:21:24 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:10.746 Malloc0 00:24:10.746 02:21:25 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:11.005 02:21:25 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.263 02:21:25 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.521 [2024-05-14 02:21:25.964363] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.521 02:21:25 -- host/timeout.sh@32 -- # bdevperf_pid=87477 00:24:11.521 02:21:25 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:11.521 02:21:25 -- host/timeout.sh@34 -- # waitforlisten 87477 /var/tmp/bdevperf.sock 00:24:11.521 02:21:25 -- common/autotest_common.sh@819 -- # '[' -z 87477 ']' 00:24:11.521 02:21:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.521 02:21:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:11.521 02:21:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.521 02:21:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:11.521 02:21:25 -- common/autotest_common.sh@10 -- # set +x 00:24:11.521 [2024-05-14 02:21:26.038710] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
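At this point the target running inside the namespace has been provisioned over RPC (TCP transport, a 64 MiB / 512 B-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420), and bdevperf has been launched as the initiator. The same steps, collected from the calls traced above into one sketch (assumes nvmf_tgt is already listening on the default RPC socket inside nvmf_tgt_ns_spdk):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: transport, backing bdev, subsystem, namespace, listener
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf on its own RPC socket, 128-deep 4 KiB verify workload
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &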
00:24:11.521 [2024-05-14 02:21:26.038808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87477 ] 00:24:11.780 [2024-05-14 02:21:26.180238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.780 [2024-05-14 02:21:26.251468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.716 02:21:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:12.716 02:21:27 -- common/autotest_common.sh@852 -- # return 0 00:24:12.716 02:21:27 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:12.716 02:21:27 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:12.975 NVMe0n1 00:24:12.975 02:21:27 -- host/timeout.sh@51 -- # rpc_pid=87525 00:24:12.975 02:21:27 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:12.975 02:21:27 -- host/timeout.sh@53 -- # sleep 1 00:24:13.234 Running I/O for 10 seconds... 00:24:14.171 02:21:28 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.433 [2024-05-14 02:21:28.816122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.816815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.816945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 
[2024-05-14 02:21:28.817275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817283] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.817623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b3e20 is same with the state(5) to be set 00:24:14.433 [2024-05-14 02:21:28.818041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 02:21:28.818095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.433 [2024-05-14 02:21:28.818121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 02:21:28.818144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 02:21:28.818167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 02:21:28.818191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 02:21:28.818213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 02:21:28.818236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 02:21:28.818259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 02:21:28.818285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 02:21:28.818308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 02:21:28.818345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.433 [2024-05-14 
02:21:28.818388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.433 [2024-05-14 02:21:28.818414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.434 [2024-05-14 02:21:28.818908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.434 [2024-05-14 02:21:28.818921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.434 [2024-05-14 02:21:28.818932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.434 [2024-05-14 02:21:28.818944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.434 [2024-05-14 02:21:28.818955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every outstanding READ/WRITE on qid:1 (lba 110040 through 111032), each completed as ABORTED - SQ DELETION (00/08), timestamps 02:21:28.818967 through 02:21:28.821161 ...]
00:24:14.436 [2024-05-14 02:21:28.821173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2514420 is same with the state(5) to be set
00:24:14.436 [2024-05-14 02:21:28.821186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:14.436 [2024-05-14 02:21:28.821194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:14.436 [2024-05-14 02:21:28.821215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110512 len:8 PRP1 0x0 PRP2 0x0
00:24:14.436 [2024-05-14 02:21:28.821224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.436 [2024-05-14 02:21:28.821269] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2514420 was disconnected and freed. reset controller.
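Each print_command/print_completion pair above is one I/O that was still outstanding on qpair 1 when its submission queue was deleted; the host completes them all with ABORTED - SQ DELETION, and bdev_nvme then frees the qpair and schedules a controller reset. To get a feel for how many commands were in flight, the captured console output can be counted directly (a rough sketch; "build.log" is only a placeholder for wherever this output was saved):

  # total aborted completions, independent of how the lines are wrapped
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l
  # split the aborted commands on qid 1 by opcode
  grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c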
00:24:14.436 [2024-05-14 02:21:28.821524] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:14.436 [2024-05-14 02:21:28.821604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cd170 (9): Bad file descriptor
00:24:14.436 [2024-05-14 02:21:28.821707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.436 [2024-05-14 02:21:28.821778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.437 [2024-05-14 02:21:28.821799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cd170 with addr=10.0.0.2, port=4420
00:24:14.437 [2024-05-14 02:21:28.821812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cd170 is same with the state(5) to be set
00:24:14.437 [2024-05-14 02:21:28.821832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cd170 (9): Bad file descriptor
00:24:14.437 [2024-05-14 02:21:28.821850] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:14.437 [2024-05-14 02:21:28.821860] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:14.437 [2024-05-14 02:21:28.821870] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:14.437 [2024-05-14 02:21:28.821891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:14.437 [2024-05-14 02:21:28.821903] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
02:21:28 -- host/timeout.sh@56 -- # sleep 2
00:24:16.342 [2024-05-14 02:21:30.822110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.342 [2024-05-14 02:21:30.822203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.342 [2024-05-14 02:21:30.822225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cd170 with addr=10.0.0.2, port=4420
00:24:16.342 [2024-05-14 02:21:30.822239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cd170 is same with the state(5) to be set
00:24:16.342 [2024-05-14 02:21:30.822265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cd170 (9): Bad file descriptor
00:24:16.342 [2024-05-14 02:21:30.822312] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.342 [2024-05-14 02:21:30.822339] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.342 [2024-05-14 02:21:30.822350] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.342 [2024-05-14 02:21:30.822376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
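The reconnect attempts are paced by the test's sleep 2 and fail immediately with errno = 111, which is ECONNREFUSED: the connect() to 10.0.0.2:4420 is being actively refused, presumably because the target listener has been taken down at this stage of the test, so spdk_nvme_ctrlr_reconnect_poll_async reports the controller as failed and bdev_nvme retries on the next cycle. The same condition can be checked from the shell (an illustration only, using bash's /dev/tcp redirection rather than anything from SPDK):

  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo "port open" || echo "connection refused or unreachable"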
00:24:16.342 [2024-05-14 02:21:30.822388] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.342 02:21:30 -- host/timeout.sh@57 -- # get_controller
00:24:16.342 02:21:30 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:16.342 02:21:30 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:16.601 02:21:31 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:24:16.601 02:21:31 -- host/timeout.sh@58 -- # get_bdev
00:24:16.601 02:21:31 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:16.601 02:21:31 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:16.860 02:21:31 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:24:16.860 02:21:31 -- host/timeout.sh@61 -- # sleep 5
00:24:18.239 [2024-05-14 02:21:32.822595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.239 [2024-05-14 02:21:32.822698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.239 [2024-05-14 02:21:32.822720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cd170 with addr=10.0.0.2, port=4420
00:24:18.239 [2024-05-14 02:21:32.822735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cd170 is same with the state(5) to be set
00:24:18.239 [2024-05-14 02:21:32.822762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cd170 (9): Bad file descriptor
00:24:18.239 [2024-05-14 02:21:32.822797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:18.239 [2024-05-14 02:21:32.822808] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:18.239 [2024-05-14 02:21:32.822819] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.239 [2024-05-14 02:21:32.822846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:18.239 [2024-05-14 02:21:32.822858] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:20.772 [2024-05-14 02:21:34.822988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
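While the controller keeps retrying, timeout.sh asserts that the controller and its bdev are still registered: get_controller (timeout.sh line 41) and get_bdev (line 37) are thin wrappers over the two RPCs traced above. A minimal reconstruction of that pattern (the wrapper bodies are inferred from the xtrace output; only the RPC names, socket path and expected names come from the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
  get_bdev()       { "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'; }

  # at this point in the test both names are still expected to be present
  [[ $(get_controller) == "NVMe0" ]]
  [[ $(get_bdev) == "NVMe0n1" ]]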
00:24:21.338
00:24:21.338                                                                                     Latency(us)
00:24:21.338 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min          max
00:24:21.338 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:21.338 Verification LBA range: start 0x0 length 0x4000
00:24:21.338     NVMe0n1                 :       8.17    1684.16       6.58      15.68     0.00   75182.78    3202.33   7015926.69
00:24:21.338 ===================================================================================================================
00:24:21.338 Total                       :              1684.16       6.58      15.68     0.00   75182.78    3202.33   7015926.69
00:24:21.338 0
00:24:21.905 02:21:36 -- host/timeout.sh@62 -- # get_controller
00:24:21.905 02:21:36 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:21.905 02:21:36 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:22.164 02:21:36 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:24:22.164 02:21:36 -- host/timeout.sh@63 -- # get_bdev
00:24:22.164 02:21:36 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:22.164 02:21:36 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:22.443 02:21:36 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:24:22.443 02:21:36 -- host/timeout.sh@65 -- # wait 87525
00:24:22.443 02:21:36 -- host/timeout.sh@67 -- # killprocess 87477
00:24:22.443 02:21:36 -- common/autotest_common.sh@926 -- # '[' -z 87477 ']'
00:24:22.443 02:21:36 -- common/autotest_common.sh@930 -- # kill -0 87477
00:24:22.443 02:21:36 -- common/autotest_common.sh@931 -- # uname
00:24:22.444 02:21:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:22.444 02:21:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87477
00:24:22.444 02:21:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2
killing process with pid 87477
Received shutdown signal, test time was about 9.227179 seconds
00:24:22.444
00:24:22.444                                                                                     Latency(us)
00:24:22.444 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min          max
00:24:22.444 ===================================================================================================================
00:24:22.444 Total                       :                 0.00       0.00       0.00       0.00       0.00       0.00         0.00
00:24:22.444 02:21:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:22.444 02:21:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87477'
00:24:22.444 02:21:36 -- common/autotest_common.sh@945 -- # kill 87477
00:24:22.444 02:21:36 -- common/autotest_common.sh@950 -- # wait 87477
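The run summary is internally consistent: 1684.16 IOPS at an I/O size of 4096 bytes is 1684.16 x 4096 / 2^20, about 6.58 MiB/s, matching the MiB/s column, and the non-zero Fail/s value is consistent with I/O failing while the controller was unreachable. The killprocess calls traced above come from autotest_common.sh; a rough reconstruction from the xtrace output follows (the sudo special case is not exercised in this run, so the sketch only hints at it):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                           # is the process still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")  # reactor_2 in this run
      fi
      [ "$process_name" = sudo ] && return 1               # placeholder for the sudo branch
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }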
00:24:22.702 02:21:37 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-05-14 02:21:37.294658] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:22.960 02:21:37 -- host/timeout.sh@74 -- # bdevperf_pid=87677
00:24:22.960 02:21:37 -- host/timeout.sh@76 -- # waitforlisten 87677 /var/tmp/bdevperf.sock
00:24:22.960 02:21:37 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:22.960 02:21:37 -- common/autotest_common.sh@819 -- # '[' -z 87677 ']'
00:24:22.960 02:21:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:22.960 02:21:37 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:22.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:22.960 02:21:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:22.960 02:21:37 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:22.960 02:21:37 -- common/autotest_common.sh@10 -- # set +x
00:24:22.960 [2024-05-14 02:21:37.368904] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:24:22.960 [2024-05-14 02:21:37.369024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87677 ]
00:24:23.218 [2024-05-14 02:21:37.509043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:23.218 [2024-05-14 02:21:37.571145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:23.783 02:21:38 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:23.783 02:21:38 -- common/autotest_common.sh@852 -- # return 0
00:24:23.783 02:21:38 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:24.041 02:21:38 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:24:24.299 NVMe0n1
00:24:24.557 02:21:38 -- host/timeout.sh@84 -- # rpc_pid=87725
00:24:24.557 02:21:38 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:24.557 02:21:38 -- host/timeout.sh@86 -- # sleep 1
00:24:24.557 Running I/O for 10 seconds...
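For this pass the test adds a TCP listener for cnode1 on 10.0.0.2:4420, starts a fresh bdevperf in wait-for-RPC mode (-z), and only then attaches the controller with explicit reconnect knobs before kicking off I/O through bdevperf.py. The same sequence condensed into a sketch (paths and arguments are taken verbatim from the trace; the polling loop is only a crude stand-in for the real waitforlisten helper):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  "$bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  # wait until the RPC socket answers before configuring the bdev layer
  until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

  "$rpc" -s "$sock" bdev_nvme_set_options -r -1
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
  rpc_pid=$!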
00:24:25.493 02:21:39 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:25.753 [2024-05-14 02:21:40.127592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4600 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state message repeats for tqpair=0x23a4600, timestamps 02:21:40.127643 through 02:21:40.127818 ...]
00:24:25.754 [2024-05-14 02:21:40.128001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.754 [2024-05-14 02:21:40.128032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... as in the earlier reset, every outstanding READ/WRITE on qid:1 is completed as ABORTED - SQ DELETION (00/08), timestamps 02:21:40.128054 through 02:21:40.129495 ...]
00:24:25.755 [2024-05-14 02:21:40.129507] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.755 [2024-05-14 02:21:40.129516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.755 [2024-05-14 02:21:40.129537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.755 [2024-05-14 02:21:40.129559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.755 [2024-05-14 02:21:40.129580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.755 [2024-05-14 02:21:40.129601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.755 [2024-05-14 02:21:40.129622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.755 [2024-05-14 02:21:40.129644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.755 [2024-05-14 02:21:40.129665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.755 [2024-05-14 02:21:40.129687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.755 [2024-05-14 02:21:40.129719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:62 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.755 [2024-05-14 02:21:40.129739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.755 [2024-05-14 02:21:40.129761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.755 [2024-05-14 02:21:40.129789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.755 [2024-05-14 02:21:40.129819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.129831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.129840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.129852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.129862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.129874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.129883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.129895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.129904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.129917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.129934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.129946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.129956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.129968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.129989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 
[2024-05-14 02:21:40.130448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.756 [2024-05-14 02:21:40.130729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.756 [2024-05-14 02:21:40.130741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.756 [2024-05-14 02:21:40.130750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.130765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.130782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.130817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.130835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.130847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.130857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.130868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.130878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.130889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.130899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.130910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.130920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.130931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.130941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.130952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.130962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.130975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.130984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.130995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.131005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.131016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.757 [2024-05-14 02:21:40.131026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.131037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2037420 is same with the state(5) to be set 00:24:25.757 [2024-05-14 02:21:40.131049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:25.757 [2024-05-14 02:21:40.131057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:25.757 [2024-05-14 02:21:40.131066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109200 len:8 PRP1 0x0 PRP2 0x0 00:24:25.757 [2024-05-14 02:21:40.131075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.757 [2024-05-14 02:21:40.131118] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2037420 was disconnected and freed. reset controller. 
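Each completion in this dump carries its NVMe status as an (SCT/SC) pair. The (00/08) printed above is status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion, which is consistent with the qpair teardown logged just above: the I/O submission queue is torn down while requests are still queued, and nvme_qpair_abort_queued_reqs completes them manually with that status. A minimal decoding sketch follows; the decode_nvme_status helper is hypothetical, shown only to document the notation, and is not part of SPDK or of this test suite.

# Hypothetical helper: documents what the "(00/08)" pairs in this log mean.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "${sct}/${sc}" in
        # SCT 0x0 = generic command status; SC 0x08 = command aborted due to SQ deletion.
        00/08) echo "GENERIC: COMMAND ABORTED DUE TO SQ DELETION" ;;
        *)     echo "status ${sct}/${sc} not handled here; see the NVMe base specification" ;;
    esac
}
decode_nvme_status 00 08    # prints: GENERIC: COMMAND ABORTED DUE TO SQ DELETION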
00:24:25.757 [2024-05-14 02:21:40.131244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:25.757 [2024-05-14 02:21:40.131261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.757 [2024-05-14 02:21:40.131272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:25.757 [2024-05-14 02:21:40.131282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.757 [2024-05-14 02:21:40.131294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:25.757 [2024-05-14 02:21:40.131304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.757 [2024-05-14 02:21:40.131314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:25.757 [2024-05-14 02:21:40.131324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.757 [2024-05-14 02:21:40.131333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff0170 is same with the state(5) to be set
00:24:25.757 [2024-05-14 02:21:40.131555] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:25.757 [2024-05-14 02:21:40.131576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff0170 (9): Bad file descriptor
00:24:25.757 [2024-05-14 02:21:40.131672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.757 [2024-05-14 02:21:40.131722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.757 [2024-05-14 02:21:40.131740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff0170 with addr=10.0.0.2, port=4420
00:24:25.757 [2024-05-14 02:21:40.131750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff0170 is same with the state(5) to be set
00:24:25.757 [2024-05-14 02:21:40.131769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff0170 (9): Bad file descriptor
00:24:25.757 [2024-05-14 02:21:40.131786] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:25.757 [2024-05-14 02:21:40.131821] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:25.757 [2024-05-14 02:21:40.131833] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:25.757 [2024-05-14 02:21:40.131854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
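The errno = 111 reported by posix_sock_create above is ECONNREFUSED on Linux: the test removed the target's TCP listener earlier, so each reconnect attempt driven by spdk_nvme_ctrlr_reconnect_poll_async is refused until the listener is re-added in the next step. An illustrative shell probe (not part of host/timeout.sh) that observes the same condition:

# Poll the target address/port from the log; while the listener is removed,
# the connect is refused (errno 111), matching the posix.c errors above.
until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
    echo "10.0.0.2:4420 still refusing connections"
    sleep 1
done
echo "listener is back; host reconnects can now succeed"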
00:24:25.757 [2024-05-14 02:21:40.131865] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:25.757 02:21:40 -- host/timeout.sh@90 -- # sleep 1
00:24:26.694 [2024-05-14 02:21:41.131998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.694 [2024-05-14 02:21:41.132108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.694 [2024-05-14 02:21:41.132130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff0170 with addr=10.0.0.2, port=4420
00:24:26.694 [2024-05-14 02:21:41.132145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff0170 is same with the state(5) to be set
00:24:26.694 [2024-05-14 02:21:41.132171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff0170 (9): Bad file descriptor
00:24:26.694 [2024-05-14 02:21:41.132190] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:26.694 [2024-05-14 02:21:41.132200] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:26.694 [2024-05-14 02:21:41.132210] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:26.694 [2024-05-14 02:21:41.132251] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:26.694 [2024-05-14 02:21:41.132263] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:26.694 02:21:41 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:26.952 [2024-05-14 02:21:41.393844] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:26.952 02:21:41 -- host/timeout.sh@92 -- # wait 87725
00:24:27.886 [2024-05-14 02:21:42.150427] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:34.448
00:24:34.448 Latency(us)
00:24:34.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.448 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:34.448 Verification LBA range: start 0x0 length 0x4000
00:24:34.448 NVMe0n1 : 10.01 8507.11 33.23 0.00 0.00 15020.95 1467.11 3019898.88
00:24:34.448 ===================================================================================================================
00:24:34.448 Total : 8507.11 33.23 0.00 0.00 15020.95 1467.11 3019898.88
00:24:34.448 0
00:24:34.448 02:21:49 -- host/timeout.sh@97 -- # rpc_pid=87842
00:24:34.448 02:21:49 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:34.448 02:21:49 -- host/timeout.sh@98 -- # sleep 1
00:24:34.707 Running I/O for 10 seconds...
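The xtrace lines above show the recovery half of the timeout test: host/timeout.sh re-adds the TCP listener so the host's pending controller reset can reconnect, waits on the earlier step (wait 87725), whose latency summary then prints, and starts another I/O pass through bdevperf's RPC helper. A condensed sketch of those two commands, taken as the log shows them; the backgrounding and the rpc_pid=$! assignment are an assumption inferred from the rpc_pid=87842 line, not copied from the script:

# Re-add the listener removed earlier in the test so the initiator can
# reconnect to 10.0.0.2:4420 and complete its controller reset.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Ask the long-running bdevperf process, over its RPC socket, to run another
# round of I/O; keep the helper in the background and remember its PID
# (assumed to be how rpc_pid gets its value).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &
rpc_pid=$!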
00:24:35.643 02:21:50 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:35.905 [2024-05-14 02:21:50.269804-02:21:50.270008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22010b0 is same with the state(5) to be set (same message repeated 15 times)
00:24:35.905-00:24:35.909 [2024-05-14 02:21:50.270302-02:21:50.272218] nvme_qpair.c: repeated *NOTICE* pairs: each queued READ/WRITE on sqid:1 (nsid:1, len:8, lba range 109088-110056) printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.909 [2024-05-14 02:21:50.272229] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.909 [2024-05-14 02:21:50.272916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 
02:21:50.272939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.909 [2024-05-14 02:21:50.272960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.909 [2024-05-14 02:21:50.272971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.272981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.272992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.910 [2024-05-14 02:21:50.273218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.910 [2024-05-14 02:21:50.273256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.910 [2024-05-14 02:21:50.273264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109624 len:8 PRP1 0x0 PRP2 0x0 00:24:35.910 [2024-05-14 02:21:50.273276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.910 [2024-05-14 02:21:50.273321] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20573f0 was disconnected and freed. reset controller. 00:24:35.910 [2024-05-14 02:21:50.273567] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.910 [2024-05-14 02:21:50.273654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff0170 (9): Bad file descriptor 00:24:35.910 [2024-05-14 02:21:50.273755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.910 [2024-05-14 02:21:50.273837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.910 [2024-05-14 02:21:50.273856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff0170 with addr=10.0.0.2, port=4420 00:24:35.910 [2024-05-14 02:21:50.273872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff0170 is same with the state(5) to be set 00:24:35.910 [2024-05-14 02:21:50.273893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff0170 (9): Bad file descriptor 00:24:35.910 [2024-05-14 02:21:50.273909] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.910 [2024-05-14 02:21:50.273919] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.910 [2024-05-14 02:21:50.273928] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.910 [2024-05-14 02:21:50.273949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
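Each failed reconnect attempt above logs connect() errno = 111. As a quick illustration (not part of the test itself), the errno can be decoded from a shell; the interpretation in the comment is the expected one here, since the target's TCP listener was removed:
python3 -c 'import errno, os; print(errno.errorcode[111], "=", os.strerror(111))'
# prints: ECONNREFUSED = Connection refused  (nothing is listening on 10.0.0.2:4420 at this point)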
00:24:35.910 [2024-05-14 02:21:50.273959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.910 02:21:50 -- host/timeout.sh@101 -- # sleep 3 00:24:36.870 [2024-05-14 02:21:51.274090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.870 [2024-05-14 02:21:51.274192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.870 [2024-05-14 02:21:51.274213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff0170 with addr=10.0.0.2, port=4420 00:24:36.870 [2024-05-14 02:21:51.274227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff0170 is same with the state(5) to be set 00:24:36.870 [2024-05-14 02:21:51.274254] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff0170 (9): Bad file descriptor 00:24:36.870 [2024-05-14 02:21:51.274273] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.870 [2024-05-14 02:21:51.274283] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.871 [2024-05-14 02:21:51.274294] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.871 [2024-05-14 02:21:51.274335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.871 [2024-05-14 02:21:51.274361] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.805 [2024-05-14 02:21:52.274482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.805 [2024-05-14 02:21:52.274601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.805 [2024-05-14 02:21:52.274621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff0170 with addr=10.0.0.2, port=4420 00:24:37.805 [2024-05-14 02:21:52.274635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff0170 is same with the state(5) to be set 00:24:37.805 [2024-05-14 02:21:52.274675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff0170 (9): Bad file descriptor 00:24:37.805 [2024-05-14 02:21:52.274710] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.805 [2024-05-14 02:21:52.274720] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.805 [2024-05-14 02:21:52.274730] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.805 [2024-05-14 02:21:52.274757] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.805 [2024-05-14 02:21:52.274769] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.741 [2024-05-14 02:21:53.276975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.741 [2024-05-14 02:21:53.277084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.741 [2024-05-14 02:21:53.277106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff0170 with addr=10.0.0.2, port=4420 00:24:38.741 [2024-05-14 02:21:53.277119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff0170 is same with the state(5) to be set 00:24:38.741 [2024-05-14 02:21:53.277297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff0170 (9): Bad file descriptor 00:24:38.741 [2024-05-14 02:21:53.277458] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.741 [2024-05-14 02:21:53.277471] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.741 [2024-05-14 02:21:53.277481] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.741 [2024-05-14 02:21:53.280152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.741 [2024-05-14 02:21:53.280188] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.741 02:21:53 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.998 [2024-05-14 02:21:53.543716] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.998 02:21:53 -- host/timeout.sh@103 -- # wait 87842 00:24:39.931 [2024-05-14 02:21:54.311401] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
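The rpc.py call traced at host/timeout.sh@102 restores the TCP listener that the test had removed, and the next reset attempt then completes ("Resetting controller successful"). A minimal sketch of that listener bounce, reusing the script path, NQN, and address from this run (the sleep is illustrative, not the test's own timing):
# Remove the listener: host-side connect() starts failing with ECONNREFUSED and resets keep failing
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
# Re-add the listener: the pending reconnect goes through and the controller reset completes
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420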
00:24:45.235 00:24:45.235 Latency(us) 00:24:45.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.235 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:45.235 Verification LBA range: start 0x0 length 0x4000 00:24:45.235 NVMe0n1 : 10.01 7230.58 28.24 5205.87 0.00 10275.72 848.99 3019898.88 00:24:45.235 =================================================================================================================== 00:24:45.235 Total : 7230.58 28.24 5205.87 0.00 10275.72 0.00 3019898.88 00:24:45.235 0 00:24:45.235 02:21:59 -- host/timeout.sh@105 -- # killprocess 87677 00:24:45.235 02:21:59 -- common/autotest_common.sh@926 -- # '[' -z 87677 ']' 00:24:45.235 02:21:59 -- common/autotest_common.sh@930 -- # kill -0 87677 00:24:45.235 02:21:59 -- common/autotest_common.sh@931 -- # uname 00:24:45.235 02:21:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:45.235 02:21:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87677 00:24:45.235 killing process with pid 87677 00:24:45.235 Received shutdown signal, test time was about 10.000000 seconds 00:24:45.235 00:24:45.235 Latency(us) 00:24:45.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.235 =================================================================================================================== 00:24:45.235 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.235 02:21:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:45.235 02:21:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:45.235 02:21:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87677' 00:24:45.235 02:21:59 -- common/autotest_common.sh@945 -- # kill 87677 00:24:45.235 02:21:59 -- common/autotest_common.sh@950 -- # wait 87677 00:24:45.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:45.235 02:21:59 -- host/timeout.sh@110 -- # bdevperf_pid=87967 00:24:45.235 02:21:59 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:24:45.235 02:21:59 -- host/timeout.sh@112 -- # waitforlisten 87967 /var/tmp/bdevperf.sock 00:24:45.235 02:21:59 -- common/autotest_common.sh@819 -- # '[' -z 87967 ']' 00:24:45.235 02:21:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.235 02:21:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:45.235 02:21:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.235 02:21:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:45.235 02:21:59 -- common/autotest_common.sh@10 -- # set +x 00:24:45.235 [2024-05-14 02:21:59.445150] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
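Two notes on the block above. First, the throughput in the latency table is consistent with the I/O size: 7230.58 IOPS of 4096-byte I/O is about 28.24 MiB/s, which matches the MiB/s column. Second, the replacement bdevperf instance is started with -z, so it only opens its RPC socket and waits to be configured; the polling loop below is an illustrative stand-in for the test's waitforlisten helper, and the backgrounding '&' is added here for the sketch:
# Throughput check for the table above
awk 'BEGIN { printf "%.2f MiB/s\n", 7230.58 * 4096 / 1048576 }'   # -> 28.24 MiB/s

# Start bdevperf idle (-z) on a private RPC socket, then wait for that socket to exist
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done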
00:24:45.235 [2024-05-14 02:21:59.445452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87967 ] 00:24:45.235 [2024-05-14 02:21:59.580844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.235 [2024-05-14 02:21:59.645285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.168 02:22:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:46.168 02:22:00 -- common/autotest_common.sh@852 -- # return 0 00:24:46.168 02:22:00 -- host/timeout.sh@116 -- # dtrace_pid=87991 00:24:46.168 02:22:00 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87967 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:46.168 02:22:00 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:46.168 02:22:00 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:46.735 NVMe0n1 00:24:46.735 02:22:01 -- host/timeout.sh@124 -- # rpc_pid=88044 00:24:46.735 02:22:01 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:46.735 02:22:01 -- host/timeout.sh@125 -- # sleep 1 00:24:46.735 Running I/O for 10 seconds... 00:24:47.670 02:22:02 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.931 [2024-05-14 02:22:02.318731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.931 [2024-05-14 02:22:02.318794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.931 [2024-05-14 02:22:02.318806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.931 [2024-05-14 02:22:02.318815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.931 [2024-05-14 02:22:02.318824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.931 [2024-05-14 02:22:02.318833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.931 [2024-05-14 02:22:02.318841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.931 [2024-05-14 02:22:02.318850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.931 [2024-05-14 02:22:02.318858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.931 [2024-05-14 02:22:02.318866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.931 [2024-05-14 02:22:02.318875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.318982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319129] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22047e0 is same with the state(5) to be set 00:24:47.932 [2024-05-14 02:22:02.319453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319536] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.932 [2024-05-14 02:22:02.319909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.932 [2024-05-14 02:22:02.319920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.319929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.319941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.319950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.319976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.319985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
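The flood of "ABORTED - SQ DELETION" completions here (and continuing below) follows directly from the nvmf_subsystem_remove_listener call traced at host/timeout.sh@126: the connection is torn down and every queued read is aborted while the host retries. A condensed sketch of the host-side sequence traced at host/timeout.sh@115-126, using exactly the commands and parameters from this run (only the comments and the backgrounding '&' are added; the meaning of the timeout flags is taken from their names):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
# Attach the remote controller with a 2 s reconnect delay and a 5 s controller-loss timeout
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Start the random-read workload, give it a second, then yank the target listener out from under it
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420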
00:24:47.933 [2024-05-14 02:22:02.319996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320232] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:47 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.933 [2024-05-14 02:22:02.320839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.933 [2024-05-14 02:22:02.320848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.320860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.320869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.320880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.320890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.320901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22544 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.320910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.320923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.320935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.320947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.320956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.320967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.320977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.320988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.320997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:47.934 [2024-05-14 02:22:02.321122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 
02:22:02.321335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.934 [2024-05-14 02:22:02.321716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.934 [2024-05-14 02:22:02.321727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321777] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.321976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.321997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:47.935 [2024-05-14 02:22:02.322226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.935 [2024-05-14 02:22:02.322277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125420 is same with the state(5) to be set 00:24:47.935 [2024-05-14 02:22:02.322299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.935 [2024-05-14 02:22:02.322306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.935 [2024-05-14 02:22:02.322317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32264 len:8 PRP1 0x0 PRP2 0x0 00:24:47.935 [2024-05-14 02:22:02.322327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.935 [2024-05-14 02:22:02.322369] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2125420 was disconnected and freed. reset controller. 00:24:47.935 [2024-05-14 02:22:02.322651] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.935 [2024-05-14 02:22:02.322740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20de170 (9): Bad file descriptor 00:24:47.935 [2024-05-14 02:22:02.322877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.935 [2024-05-14 02:22:02.322930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.935 [2024-05-14 02:22:02.322947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20de170 with addr=10.0.0.2, port=4420 00:24:47.935 [2024-05-14 02:22:02.322957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20de170 is same with the state(5) to be set 00:24:47.935 [2024-05-14 02:22:02.322977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20de170 (9): Bad file descriptor 00:24:47.935 [2024-05-14 02:22:02.322993] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:47.935 [2024-05-14 02:22:02.323003] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:47.935 [2024-05-14 02:22:02.323014] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:47.935 [2024-05-14 02:22:02.323034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:47.935 [2024-05-14 02:22:02.323044] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.935 02:22:02 -- host/timeout.sh@128 -- # wait 88044 00:24:49.836 [2024-05-14 02:22:04.323182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.836 [2024-05-14 02:22:04.323297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.836 [2024-05-14 02:22:04.323316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20de170 with addr=10.0.0.2, port=4420 00:24:49.836 [2024-05-14 02:22:04.323328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20de170 is same with the state(5) to be set 00:24:49.836 [2024-05-14 02:22:04.323369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20de170 (9): Bad file descriptor 00:24:49.836 [2024-05-14 02:22:04.323388] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.836 [2024-05-14 02:22:04.323398] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.836 [2024-05-14 02:22:04.323408] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.836 [2024-05-14 02:22:04.323435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:49.836 [2024-05-14 02:22:04.323446] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.741 [2024-05-14 02:22:06.323680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.741 [2024-05-14 02:22:06.323774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.741 [2024-05-14 02:22:06.323796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20de170 with addr=10.0.0.2, port=4420 00:24:51.741 [2024-05-14 02:22:06.323809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20de170 is same with the state(5) to be set 00:24:51.741 [2024-05-14 02:22:06.323835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20de170 (9): Bad file descriptor 00:24:51.741 [2024-05-14 02:22:06.323855] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.741 [2024-05-14 02:22:06.323865] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.741 [2024-05-14 02:22:06.323876] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.741 [2024-05-14 02:22:06.323903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.741 [2024-05-14 02:22:06.323914] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.278 [2024-05-14 02:22:08.324042] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
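Note on the sequence above: once the target side deleted the submission queue, every controller reset fails with connect() errno 111 (ECONNREFUSED), bdev_nvme schedules the next reconnect roughly two seconds later, and the cycle repeats (02:22:02, :04, :06, :08) until the test window closes. The trace check that follows counts those scheduled retries. A minimal sketch of that verification step, assuming the trace path shown below; the ">= 2" threshold is an illustrative assumption, the exact check lives in host/timeout.sh:

TRACE=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
# Each scheduled retry appears in the trace as "reconnect delay bdev controller NVMe0".
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$TRACE")
# With the listener gone, every reset attempt fails with ECONNREFUSED and a new
# reconnect is scheduled ~2 s later, so an ~8 s run should record several of them.
if (( delays >= 2 )); then
    echo "reconnect delay applied $delays times"
else
    echo "expected delayed reconnects, got $delays" >&2
    exit 1
fi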
00:24:54.848 00:24:54.848 Latency(us) 00:24:54.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.848 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:54.848 NVMe0n1 : 8.12 2434.37 9.51 15.77 0.00 52176.03 3753.43 7046430.72 00:24:54.848 =================================================================================================================== 00:24:54.848 Total : 2434.37 9.51 15.77 0.00 52176.03 3753.43 7046430.72 00:24:54.848 0 00:24:54.848 02:22:09 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:54.848 Attaching 5 probes... 00:24:54.848 1386.798984: reset bdev controller NVMe0 00:24:54.848 1386.964224: reconnect bdev controller NVMe0 00:24:54.848 3387.226564: reconnect delay bdev controller NVMe0 00:24:54.848 3387.243340: reconnect bdev controller NVMe0 00:24:54.848 5387.680496: reconnect delay bdev controller NVMe0 00:24:54.848 5387.698551: reconnect bdev controller NVMe0 00:24:54.848 7388.160433: reconnect delay bdev controller NVMe0 00:24:54.848 7388.178114: reconnect bdev controller NVMe0 00:24:54.848 02:22:09 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:54.848 02:22:09 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:54.848 02:22:09 -- host/timeout.sh@136 -- # kill 87991 00:24:54.848 02:22:09 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:54.848 02:22:09 -- host/timeout.sh@139 -- # killprocess 87967 00:24:54.848 02:22:09 -- common/autotest_common.sh@926 -- # '[' -z 87967 ']' 00:24:54.848 02:22:09 -- common/autotest_common.sh@930 -- # kill -0 87967 00:24:54.848 02:22:09 -- common/autotest_common.sh@931 -- # uname 00:24:54.848 02:22:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:54.848 02:22:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87967 00:24:54.848 killing process with pid 87967 00:24:54.848 Received shutdown signal, test time was about 8.175656 seconds 00:24:54.848 00:24:54.848 Latency(us) 00:24:54.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.848 =================================================================================================================== 00:24:54.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.848 02:22:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:54.848 02:22:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:54.848 02:22:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87967' 00:24:54.848 02:22:09 -- common/autotest_common.sh@945 -- # kill 87967 00:24:54.848 02:22:09 -- common/autotest_common.sh@950 -- # wait 87967 00:24:55.107 02:22:09 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.367 02:22:09 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:55.367 02:22:09 -- host/timeout.sh@145 -- # nvmftestfini 00:24:55.367 02:22:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:55.367 02:22:09 -- nvmf/common.sh@116 -- # sync 00:24:55.367 02:22:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:55.367 02:22:09 -- nvmf/common.sh@119 -- # set +e 00:24:55.367 02:22:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:55.367 02:22:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:55.367 rmmod nvme_tcp 00:24:55.367 rmmod nvme_fabrics 00:24:55.367 rmmod nvme_keyring 00:24:55.367 02:22:09 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-fabrics 00:24:55.367 02:22:09 -- nvmf/common.sh@123 -- # set -e 00:24:55.367 02:22:09 -- nvmf/common.sh@124 -- # return 0 00:24:55.367 02:22:09 -- nvmf/common.sh@477 -- # '[' -n 87386 ']' 00:24:55.367 02:22:09 -- nvmf/common.sh@478 -- # killprocess 87386 00:24:55.367 02:22:09 -- common/autotest_common.sh@926 -- # '[' -z 87386 ']' 00:24:55.367 02:22:09 -- common/autotest_common.sh@930 -- # kill -0 87386 00:24:55.367 02:22:09 -- common/autotest_common.sh@931 -- # uname 00:24:55.367 02:22:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:55.367 02:22:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87386 00:24:55.367 killing process with pid 87386 00:24:55.367 02:22:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:55.367 02:22:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:55.367 02:22:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87386' 00:24:55.367 02:22:09 -- common/autotest_common.sh@945 -- # kill 87386 00:24:55.367 02:22:09 -- common/autotest_common.sh@950 -- # wait 87386 00:24:55.625 02:22:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:55.625 02:22:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:55.625 02:22:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:55.625 02:22:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.625 02:22:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:55.625 02:22:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.626 02:22:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.626 02:22:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.626 02:22:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:55.626 00:24:55.626 real 0m47.126s 00:24:55.626 user 2m18.986s 00:24:55.626 sys 0m4.788s 00:24:55.626 02:22:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.626 ************************************ 00:24:55.626 END TEST nvmf_timeout 00:24:55.626 02:22:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.626 ************************************ 00:24:55.885 02:22:10 -- nvmf/nvmf.sh@119 -- # [[ virt == phy ]] 00:24:55.885 02:22:10 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:24:55.885 02:22:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.885 02:22:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.885 02:22:10 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:24:55.885 00:24:55.885 real 18m0.135s 00:24:55.885 user 57m15.384s 00:24:55.885 sys 3m41.889s 00:24:55.885 02:22:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.885 02:22:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.885 ************************************ 00:24:55.885 END TEST nvmf_tcp 00:24:55.885 ************************************ 00:24:55.885 02:22:10 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:24:55.885 02:22:10 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:55.885 02:22:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:55.885 02:22:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:55.885 02:22:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.885 ************************************ 00:24:55.885 START TEST spdkcli_nvmf_tcp 00:24:55.885 ************************************ 00:24:55.885 02:22:10 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:55.885 * Looking for test storage... 00:24:55.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:55.885 02:22:10 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:55.885 02:22:10 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:55.885 02:22:10 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:55.885 02:22:10 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:55.885 02:22:10 -- nvmf/common.sh@7 -- # uname -s 00:24:55.885 02:22:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.885 02:22:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.885 02:22:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.885 02:22:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.885 02:22:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.885 02:22:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.885 02:22:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.885 02:22:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.885 02:22:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.885 02:22:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.885 02:22:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:24:55.885 02:22:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:24:55.885 02:22:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.885 02:22:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.885 02:22:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:55.885 02:22:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.885 02:22:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.885 02:22:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.885 02:22:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.885 02:22:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.885 02:22:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.885 02:22:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.885 02:22:10 -- paths/export.sh@5 -- # export PATH 00:24:55.886 02:22:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.886 02:22:10 -- nvmf/common.sh@46 -- # : 0 00:24:55.886 02:22:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:55.886 02:22:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:55.886 02:22:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:55.886 02:22:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.886 02:22:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.886 02:22:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:55.886 02:22:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:55.886 02:22:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:55.886 02:22:10 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:55.886 02:22:10 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:55.886 02:22:10 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:55.886 02:22:10 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:55.886 02:22:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:55.886 02:22:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.886 02:22:10 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:55.886 02:22:10 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=88266 00:24:55.886 02:22:10 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:55.886 02:22:10 -- spdkcli/common.sh@34 -- # waitforlisten 88266 00:24:55.886 02:22:10 -- common/autotest_common.sh@819 -- # '[' -z 88266 ']' 00:24:55.886 02:22:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.886 02:22:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:55.886 02:22:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.886 02:22:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:55.886 02:22:10 -- common/autotest_common.sh@10 -- # set +x 00:24:56.145 [2024-05-14 02:22:10.488821] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
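The spdkcli batch executed next drives the same JSON-RPC interface that scripts/rpc.py exposes, presented as a tree of nodes (/bdevs/malloc, /nvmf/subsystem, ...). As a rough sketch only, the calls below approximate one of the subsystems the batch creates; the rpc.py option spellings are assumptions from common SPDK usage and should be checked against the local scripts/rpc.py <command> --help:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_malloc_create -b Malloc3 32 512                              # 32 MB malloc bdev, 512-byte blocks, as in "create 32 512"
$RPC nvmf_create_transport -t tcp                                      # the batch additionally sets io_unit_size=8192
$RPC nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW -m 4
$RPC nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260

The "Executing command" lines that follow show spdkcli walking that same object tree node by node as it applies the batch.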
00:24:56.145 [2024-05-14 02:22:10.489541] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88266 ] 00:24:56.145 [2024-05-14 02:22:10.623018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:56.145 [2024-05-14 02:22:10.708663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:56.145 [2024-05-14 02:22:10.709131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.145 [2024-05-14 02:22:10.709141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.086 02:22:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:57.086 02:22:11 -- common/autotest_common.sh@852 -- # return 0 00:24:57.086 02:22:11 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:57.086 02:22:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:57.086 02:22:11 -- common/autotest_common.sh@10 -- # set +x 00:24:57.086 02:22:11 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:57.086 02:22:11 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:57.086 02:22:11 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:57.086 02:22:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:57.086 02:22:11 -- common/autotest_common.sh@10 -- # set +x 00:24:57.086 02:22:11 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:57.086 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:57.086 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:57.086 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:57.086 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:57.086 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:57.086 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:57.086 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:57.086 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:57.086 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:57.086 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:57.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:57.086 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:57.086 ' 00:24:57.656 [2024-05-14 02:22:12.015809] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:00.191 [2024-05-14 02:22:14.212312] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.131 [2024-05-14 02:22:15.477471] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:03.670 [2024-05-14 02:22:17.827191] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:05.610 [2024-05-14 02:22:19.852674] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:06.987 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:06.987 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:06.987 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:06.987 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:06.987 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:06.987 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:06.987 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:06.987 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:06.987 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.987 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:06.987 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:06.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:06.987 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:06.987 02:22:21 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:06.987 02:22:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:06.987 02:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:06.987 02:22:21 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:06.987 02:22:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:06.987 02:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:06.987 02:22:21 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:06.987 02:22:21 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:07.555 02:22:21 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:07.555 02:22:22 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:07.556 02:22:22 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:07.556 02:22:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:07.556 02:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:07.556 02:22:22 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:07.556 02:22:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:07.556 02:22:22 -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.556 02:22:22 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:07.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:07.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:07.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:07.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:07.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:07.556 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:07.556 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:07.556 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:07.556 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:07.556 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:07.556 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:07.556 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:07.556 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:07.556 ' 00:25:12.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:12.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:12.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:12.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:12.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:12.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:12.826 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:12.826 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:12.826 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:12.826 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:12.826 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:12.826 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:12.826 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:12.826 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:12.826 02:22:27 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:12.826 02:22:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:12.826 02:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:13.085 02:22:27 -- spdkcli/nvmf.sh@90 -- # killprocess 88266 00:25:13.085 02:22:27 -- common/autotest_common.sh@926 -- # '[' -z 88266 ']' 00:25:13.085 02:22:27 -- common/autotest_common.sh@930 -- # kill -0 88266 00:25:13.085 02:22:27 -- common/autotest_common.sh@931 -- # uname 00:25:13.085 02:22:27 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:13.085 02:22:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88266 00:25:13.085 killing process with pid 88266 00:25:13.085 02:22:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:13.085 02:22:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:13.085 02:22:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88266' 00:25:13.085 02:22:27 -- common/autotest_common.sh@945 -- # kill 88266 00:25:13.085 [2024-05-14 02:22:27.477630] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:13.085 02:22:27 -- common/autotest_common.sh@950 -- # wait 88266 00:25:13.344 02:22:27 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:13.344 02:22:27 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:13.344 02:22:27 -- spdkcli/common.sh@13 -- # '[' -n 88266 ']' 00:25:13.344 02:22:27 -- spdkcli/common.sh@14 -- # killprocess 88266 00:25:13.344 02:22:27 -- common/autotest_common.sh@926 -- # '[' -z 88266 ']' 00:25:13.344 02:22:27 -- common/autotest_common.sh@930 -- # kill -0 88266 00:25:13.344 Process with pid 88266 is not found 00:25:13.344 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (88266) - No such process 00:25:13.344 02:22:27 -- common/autotest_common.sh@953 -- # echo 'Process with pid 88266 is not found' 00:25:13.344 02:22:27 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:13.344 02:22:27 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:13.344 02:22:27 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:13.344 ************************************ 00:25:13.344 END TEST spdkcli_nvmf_tcp 00:25:13.344 ************************************ 00:25:13.344 00:25:13.344 real 0m17.361s 00:25:13.344 user 0m37.317s 00:25:13.344 sys 0m0.960s 00:25:13.344 02:22:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.344 02:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:13.344 02:22:27 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:13.344 02:22:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:13.344 02:22:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:13.344 02:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:13.344 ************************************ 00:25:13.344 START TEST nvmf_identify_passthru 00:25:13.344 ************************************ 00:25:13.344 02:22:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:13.344 * Looking for test storage... 
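The spdkcli_nvmf_tcp run that just ended drives the whole create/check/clear cycle through test/spdkcli/spdkcli_job.py, which is handed newline-separated triplets of an spdkcli command, a string expected in the resulting tree, and a flag (True during the create phase, False during teardown in this trace). A condensed, untested sketch of that flow, built only from commands and paths visible in the log above; the quoting is simplified and the exact meaning of the third field is an assumption here:

    # Drive spdkcli non-interactively via the test helper (triplet format as seen in the trace):
    /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "
    '/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
    '/nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192' '' True
    '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True
    '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4' '127.0.0.1:4260' True
    "
    # check_match: dump the live tree and diff it against the recorded expectations
    /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf
    /home/vagrant/spdk_repo/spdk/test/app/match/match \
        /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match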
00:25:13.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:13.344 02:22:27 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:13.344 02:22:27 -- nvmf/common.sh@7 -- # uname -s 00:25:13.344 02:22:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.344 02:22:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.344 02:22:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.344 02:22:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.344 02:22:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.344 02:22:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.344 02:22:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.344 02:22:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.344 02:22:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.344 02:22:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.344 02:22:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:25:13.344 02:22:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:25:13.344 02:22:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.344 02:22:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.344 02:22:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:13.344 02:22:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:13.344 02:22:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.344 02:22:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.344 02:22:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.344 02:22:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.344 02:22:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.344 02:22:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.344 02:22:27 -- paths/export.sh@5 -- # export PATH 00:25:13.344 02:22:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.344 02:22:27 -- nvmf/common.sh@46 -- # : 0 00:25:13.344 02:22:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:13.344 02:22:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:13.344 02:22:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:13.344 02:22:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.344 02:22:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.344 02:22:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:13.344 02:22:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:13.344 02:22:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:13.344 02:22:27 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:13.344 02:22:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.344 02:22:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.344 02:22:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.344 02:22:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.344 02:22:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.344 02:22:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.344 02:22:27 -- paths/export.sh@5 -- # export PATH 00:25:13.345 02:22:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.345 02:22:27 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:13.345 02:22:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:13.345 02:22:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.345 02:22:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:13.345 02:22:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:13.345 02:22:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:13.345 02:22:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.345 02:22:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:13.345 02:22:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.345 02:22:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:13.345 02:22:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:13.345 02:22:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:13.345 02:22:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:13.345 02:22:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:13.345 02:22:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:13.345 02:22:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.345 02:22:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.345 02:22:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:13.345 02:22:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:13.345 02:22:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:13.345 02:22:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:13.345 02:22:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:13.345 02:22:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.345 02:22:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:13.345 02:22:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:13.345 02:22:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:13.345 02:22:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:13.345 02:22:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:13.345 02:22:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:13.345 Cannot find device "nvmf_tgt_br" 00:25:13.345 02:22:27 -- nvmf/common.sh@154 -- # true 00:25:13.345 02:22:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:13.345 Cannot find device "nvmf_tgt_br2" 00:25:13.345 02:22:27 -- nvmf/common.sh@155 -- # true 00:25:13.345 02:22:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:13.345 02:22:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:13.345 Cannot find device "nvmf_tgt_br" 00:25:13.345 02:22:27 -- nvmf/common.sh@157 -- # true 00:25:13.345 02:22:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:13.345 Cannot find device "nvmf_tgt_br2" 00:25:13.345 02:22:27 -- nvmf/common.sh@158 -- # true 00:25:13.345 02:22:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:13.603 02:22:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:13.603 02:22:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:13.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.603 02:22:27 -- nvmf/common.sh@161 -- # true 00:25:13.603 02:22:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:13.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:13.603 02:22:27 -- nvmf/common.sh@162 -- # true 00:25:13.603 02:22:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:13.603 02:22:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:13.603 02:22:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:13.603 02:22:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:13.603 02:22:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:13.603 02:22:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:13.603 02:22:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:13.603 02:22:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:13.603 02:22:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:13.603 02:22:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:13.603 02:22:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:13.603 02:22:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:13.603 02:22:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:13.603 02:22:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:13.603 02:22:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:13.603 02:22:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:13.603 02:22:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:13.603 02:22:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:13.603 02:22:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:13.603 02:22:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:13.603 02:22:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:13.603 02:22:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:13.603 02:22:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:13.603 02:22:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:13.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:25:13.603 00:25:13.603 --- 10.0.0.2 ping statistics --- 00:25:13.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.603 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:13.603 02:22:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:13.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:13.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:25:13.603 00:25:13.603 --- 10.0.0.3 ping statistics --- 00:25:13.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.603 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:13.603 02:22:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:13.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
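The nvmf_veth_init sequence above builds the small virtual topology these TCP tests run on: the initiator keeps nvmf_init_if on the host side, the target gets nvmf_tgt_if inside the nvmf_tgt_ns_spdk namespace, and both veth peers hang off the nvmf_br bridge. Condensed into one untested sketch, with every command and name taken from the trace above:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host-side initiator can reach the in-namespace target address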
00:25:13.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:13.603 00:25:13.603 --- 10.0.0.1 ping statistics --- 00:25:13.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.603 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:13.603 02:22:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.603 02:22:28 -- nvmf/common.sh@421 -- # return 0 00:25:13.603 02:22:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:13.603 02:22:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.603 02:22:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:13.603 02:22:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:13.603 02:22:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.603 02:22:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:13.603 02:22:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:13.862 02:22:28 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:13.862 02:22:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:13.862 02:22:28 -- common/autotest_common.sh@10 -- # set +x 00:25:13.862 02:22:28 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:13.862 02:22:28 -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:13.862 02:22:28 -- common/autotest_common.sh@1509 -- # local bdfs 00:25:13.862 02:22:28 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:25:13.862 02:22:28 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:25:13.862 02:22:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:13.862 02:22:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:13.862 02:22:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:13.862 02:22:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:13.862 02:22:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:13.862 02:22:28 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:13.862 02:22:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:13.862 02:22:28 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:25:13.862 02:22:28 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:13.862 02:22:28 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:13.862 02:22:28 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:13.862 02:22:28 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:13.862 02:22:28 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:13.862 02:22:28 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:13.862 02:22:28 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:13.862 02:22:28 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:13.862 02:22:28 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:14.121 02:22:28 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:14.121 02:22:28 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:14.121 02:22:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:14.121 02:22:28 -- common/autotest_common.sh@10 -- # set +x 00:25:14.121 02:22:28 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:14.121 02:22:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:14.121 02:22:28 -- common/autotest_common.sh@10 -- # set +x 00:25:14.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.121 02:22:28 -- target/identify_passthru.sh@31 -- # nvmfpid=88762 00:25:14.121 02:22:28 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:14.121 02:22:28 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:14.121 02:22:28 -- target/identify_passthru.sh@35 -- # waitforlisten 88762 00:25:14.121 02:22:28 -- common/autotest_common.sh@819 -- # '[' -z 88762 ']' 00:25:14.121 02:22:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.121 02:22:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:14.121 02:22:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.121 02:22:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:14.121 02:22:28 -- common/autotest_common.sh@10 -- # set +x 00:25:14.380 [2024-05-14 02:22:28.728472] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:14.380 [2024-05-14 02:22:28.728727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.380 [2024-05-14 02:22:28.865631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.380 [2024-05-14 02:22:28.940730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:14.380 [2024-05-14 02:22:28.941131] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.381 [2024-05-14 02:22:28.941269] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.381 [2024-05-14 02:22:28.941431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
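Before the target comes up, get_first_nvme_bdf picks a local NVMe controller and spdk_nvme_identify records its serial and model number so the passthru values can be compared later. A short sketch of those steps as they appear in the trace; the head -n1 selection of the first traddr is an assumption standing in for the script's array handling:

    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    nvme_serial_number=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
    # The target runs inside the namespace built earlier; --wait-for-rpc defers init until RPCs arrive
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &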
00:25:14.381 [2024-05-14 02:22:28.941694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.381 [2024-05-14 02:22:28.941801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.381 [2024-05-14 02:22:28.941860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.381 [2024-05-14 02:22:28.941861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.316 02:22:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:15.316 02:22:29 -- common/autotest_common.sh@852 -- # return 0 00:25:15.316 02:22:29 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:15.316 02:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.316 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.317 02:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.317 02:22:29 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:15.317 02:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.317 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.317 [2024-05-14 02:22:29.800219] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:15.317 02:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.317 02:22:29 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:15.317 02:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.317 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.317 [2024-05-14 02:22:29.809497] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.317 02:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.317 02:22:29 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:15.317 02:22:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:15.317 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.317 02:22:29 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:15.317 02:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.317 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.575 Nvme0n1 00:25:15.575 02:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.575 02:22:29 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:15.575 02:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.575 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.575 02:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.575 02:22:29 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:15.575 02:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.575 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.575 02:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.575 02:22:29 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.575 02:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.575 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.575 [2024-05-14 02:22:29.942511] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.575 02:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:15.575 02:22:29 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:15.575 02:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.575 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.575 [2024-05-14 02:22:29.950311] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:15.575 [ 00:25:15.575 { 00:25:15.575 "allow_any_host": true, 00:25:15.575 "hosts": [], 00:25:15.575 "listen_addresses": [], 00:25:15.575 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:15.575 "subtype": "Discovery" 00:25:15.575 }, 00:25:15.575 { 00:25:15.575 "allow_any_host": true, 00:25:15.575 "hosts": [], 00:25:15.575 "listen_addresses": [ 00:25:15.575 { 00:25:15.575 "adrfam": "IPv4", 00:25:15.575 "traddr": "10.0.0.2", 00:25:15.575 "transport": "TCP", 00:25:15.575 "trsvcid": "4420", 00:25:15.575 "trtype": "TCP" 00:25:15.575 } 00:25:15.575 ], 00:25:15.575 "max_cntlid": 65519, 00:25:15.575 "max_namespaces": 1, 00:25:15.575 "min_cntlid": 1, 00:25:15.575 "model_number": "SPDK bdev Controller", 00:25:15.575 "namespaces": [ 00:25:15.575 { 00:25:15.575 "bdev_name": "Nvme0n1", 00:25:15.575 "name": "Nvme0n1", 00:25:15.575 "nguid": "1B138F74DB5043329AF658B3ACE273B0", 00:25:15.575 "nsid": 1, 00:25:15.575 "uuid": "1b138f74-db50-4332-9af6-58b3ace273b0" 00:25:15.575 } 00:25:15.575 ], 00:25:15.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.575 "serial_number": "SPDK00000000000001", 00:25:15.575 "subtype": "NVMe" 00:25:15.575 } 00:25:15.575 ] 00:25:15.575 02:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.575 02:22:29 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:15.575 02:22:29 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:15.575 02:22:29 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:15.834 02:22:30 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:15.834 02:22:30 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:15.834 02:22:30 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:15.834 02:22:30 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:15.834 02:22:30 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:15.834 02:22:30 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:15.834 02:22:30 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:15.834 02:22:30 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.834 02:22:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.834 02:22:30 -- common/autotest_common.sh@10 -- # set +x 00:25:15.834 02:22:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.834 02:22:30 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:15.834 02:22:30 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:15.834 02:22:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:15.834 02:22:30 -- nvmf/common.sh@116 -- # sync 00:25:16.094 02:22:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:16.094 02:22:30 -- nvmf/common.sh@119 -- # set +e 00:25:16.094 02:22:30 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:16.094 02:22:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:16.094 rmmod nvme_tcp 00:25:16.094 rmmod nvme_fabrics 00:25:16.094 rmmod nvme_keyring 00:25:16.094 02:22:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:16.094 02:22:30 -- nvmf/common.sh@123 -- # set -e 00:25:16.094 02:22:30 -- nvmf/common.sh@124 -- # return 0 00:25:16.094 02:22:30 -- nvmf/common.sh@477 -- # '[' -n 88762 ']' 00:25:16.094 02:22:30 -- nvmf/common.sh@478 -- # killprocess 88762 00:25:16.094 02:22:30 -- common/autotest_common.sh@926 -- # '[' -z 88762 ']' 00:25:16.094 02:22:30 -- common/autotest_common.sh@930 -- # kill -0 88762 00:25:16.094 02:22:30 -- common/autotest_common.sh@931 -- # uname 00:25:16.094 02:22:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:16.094 02:22:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88762 00:25:16.094 02:22:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:16.094 02:22:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:16.094 killing process with pid 88762 00:25:16.094 02:22:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88762' 00:25:16.094 02:22:30 -- common/autotest_common.sh@945 -- # kill 88762 00:25:16.094 [2024-05-14 02:22:30.553825] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:16.094 02:22:30 -- common/autotest_common.sh@950 -- # wait 88762 00:25:16.353 02:22:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:16.353 02:22:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:16.353 02:22:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:16.353 02:22:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.353 02:22:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:16.353 02:22:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.353 02:22:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:16.353 02:22:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.353 02:22:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:16.353 ************************************ 00:25:16.353 END TEST nvmf_identify_passthru 00:25:16.353 00:25:16.353 real 0m3.069s 00:25:16.353 user 0m7.722s 00:25:16.353 sys 0m0.734s 00:25:16.353 02:22:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.353 02:22:30 -- common/autotest_common.sh@10 -- # set +x 00:25:16.353 ************************************ 00:25:16.353 02:22:30 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:16.353 02:22:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:16.353 02:22:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:16.353 02:22:30 -- common/autotest_common.sh@10 -- # set +x 00:25:16.353 ************************************ 00:25:16.353 START TEST nvmf_dif 00:25:16.353 ************************************ 00:25:16.353 02:22:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:16.353 * Looking for test storage... 
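For reference, the identify_passthru test that just finished wires the local controller straight through to an NVMe-oF subsystem and checks that the serial and model numbers seen over TCP match the PCIe values recorded earlier. The RPC sequence below is copied from the rpc_cmd calls in the trace; rpc_cmd wraps scripts/rpc.py, and the default /var/tmp/spdk.sock socket is assumed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_set_config --passthru-identify-ctrlr         # enable the custom identify handler
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Pass condition: serial/model over NVMe/TCP equal the local controller's (12340 / QEMU here)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | grep -E 'Serial Number:|Model Number:'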
00:25:16.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:16.353 02:22:30 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:16.353 02:22:30 -- nvmf/common.sh@7 -- # uname -s 00:25:16.353 02:22:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.353 02:22:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.353 02:22:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.353 02:22:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.353 02:22:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.353 02:22:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.353 02:22:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.353 02:22:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.353 02:22:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.353 02:22:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.613 02:22:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:25:16.613 02:22:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:25:16.613 02:22:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.613 02:22:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.613 02:22:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:16.613 02:22:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:16.613 02:22:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.613 02:22:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.613 02:22:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.613 02:22:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.613 02:22:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.613 02:22:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.613 02:22:30 -- paths/export.sh@5 -- # export PATH 00:25:16.613 02:22:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.613 02:22:30 -- nvmf/common.sh@46 -- # : 0 00:25:16.613 02:22:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:16.613 02:22:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:16.613 02:22:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:16.613 02:22:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.613 02:22:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.613 02:22:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:16.613 02:22:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:16.613 02:22:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:16.613 02:22:30 -- target/dif.sh@15 -- # NULL_META=16 00:25:16.613 02:22:30 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:16.613 02:22:30 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:16.613 02:22:30 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:16.613 02:22:30 -- target/dif.sh@135 -- # nvmftestinit 00:25:16.613 02:22:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:16.613 02:22:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.613 02:22:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:16.613 02:22:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:16.613 02:22:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:16.613 02:22:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.613 02:22:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:16.613 02:22:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.613 02:22:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:16.613 02:22:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:16.613 02:22:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:16.613 02:22:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:16.613 02:22:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:16.613 02:22:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:16.613 02:22:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.613 02:22:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.613 02:22:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:16.613 02:22:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:16.613 02:22:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:16.613 02:22:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:16.613 02:22:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:16.613 02:22:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.613 02:22:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:16.613 02:22:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:16.613 02:22:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:16.613 02:22:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:16.613 02:22:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:16.613 02:22:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:16.613 Cannot find device "nvmf_tgt_br" 
00:25:16.613 02:22:30 -- nvmf/common.sh@154 -- # true 00:25:16.613 02:22:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:16.613 Cannot find device "nvmf_tgt_br2" 00:25:16.613 02:22:31 -- nvmf/common.sh@155 -- # true 00:25:16.613 02:22:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:16.613 02:22:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:16.613 Cannot find device "nvmf_tgt_br" 00:25:16.613 02:22:31 -- nvmf/common.sh@157 -- # true 00:25:16.613 02:22:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:16.613 Cannot find device "nvmf_tgt_br2" 00:25:16.613 02:22:31 -- nvmf/common.sh@158 -- # true 00:25:16.613 02:22:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:16.613 02:22:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:16.613 02:22:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:16.613 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:16.613 02:22:31 -- nvmf/common.sh@161 -- # true 00:25:16.613 02:22:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:16.613 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:16.613 02:22:31 -- nvmf/common.sh@162 -- # true 00:25:16.613 02:22:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:16.613 02:22:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:16.613 02:22:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:16.613 02:22:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:16.613 02:22:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:16.613 02:22:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:16.613 02:22:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:16.613 02:22:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:16.613 02:22:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:16.613 02:22:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:16.613 02:22:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:16.613 02:22:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:16.613 02:22:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:16.613 02:22:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:16.613 02:22:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:16.613 02:22:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:16.613 02:22:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:16.613 02:22:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:16.613 02:22:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:16.873 02:22:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:16.873 02:22:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:16.873 02:22:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:16.873 02:22:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:16.873 02:22:31 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:16.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:25:16.873 00:25:16.873 --- 10.0.0.2 ping statistics --- 00:25:16.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.873 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:25:16.873 02:22:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:16.873 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:16.873 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:25:16.873 00:25:16.873 --- 10.0.0.3 ping statistics --- 00:25:16.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.873 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:25:16.873 02:22:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:16.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:16.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:25:16.873 00:25:16.873 --- 10.0.0.1 ping statistics --- 00:25:16.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.873 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:25:16.873 02:22:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.873 02:22:31 -- nvmf/common.sh@421 -- # return 0 00:25:16.873 02:22:31 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:16.873 02:22:31 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:17.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:17.132 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:17.132 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:17.132 02:22:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.132 02:22:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:17.132 02:22:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:17.132 02:22:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.132 02:22:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:17.132 02:22:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:17.132 02:22:31 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:17.132 02:22:31 -- target/dif.sh@137 -- # nvmfappstart 00:25:17.133 02:22:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:17.133 02:22:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:17.133 02:22:31 -- common/autotest_common.sh@10 -- # set +x 00:25:17.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.133 02:22:31 -- nvmf/common.sh@469 -- # nvmfpid=89101 00:25:17.133 02:22:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:17.133 02:22:31 -- nvmf/common.sh@470 -- # waitforlisten 89101 00:25:17.133 02:22:31 -- common/autotest_common.sh@819 -- # '[' -z 89101 ']' 00:25:17.133 02:22:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.133 02:22:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:17.133 02:22:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
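The nvmf_dif run starting here differs from the earlier tests in two ways that matter for data integrity: the TCP transport is created with --dif-insert-or-strip, and the namespace is backed by a null bdev carrying 16 bytes of per-block metadata with DIF type 1 (NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64, NULL_DIF=1 above). The create_subsystems step a little further down in the trace issues the equivalent of this untested sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420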
00:25:17.133 02:22:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:17.133 02:22:31 -- common/autotest_common.sh@10 -- # set +x 00:25:17.408 [2024-05-14 02:22:31.771884] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:17.408 [2024-05-14 02:22:31.771974] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.408 [2024-05-14 02:22:31.915851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.678 [2024-05-14 02:22:31.992058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:17.678 [2024-05-14 02:22:31.992219] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.678 [2024-05-14 02:22:31.992236] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.678 [2024-05-14 02:22:31.992247] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.678 [2024-05-14 02:22:31.992282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.245 02:22:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:18.246 02:22:32 -- common/autotest_common.sh@852 -- # return 0 00:25:18.246 02:22:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:18.246 02:22:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:18.246 02:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:18.505 02:22:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.505 02:22:32 -- target/dif.sh@139 -- # create_transport 00:25:18.505 02:22:32 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:18.505 02:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.505 02:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:18.505 [2024-05-14 02:22:32.862442] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.505 02:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.505 02:22:32 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:18.505 02:22:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:18.505 02:22:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:18.505 02:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:18.505 ************************************ 00:25:18.505 START TEST fio_dif_1_default 00:25:18.505 ************************************ 00:25:18.505 02:22:32 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:25:18.505 02:22:32 -- target/dif.sh@86 -- # create_subsystems 0 00:25:18.505 02:22:32 -- target/dif.sh@28 -- # local sub 00:25:18.505 02:22:32 -- target/dif.sh@30 -- # for sub in "$@" 00:25:18.505 02:22:32 -- target/dif.sh@31 -- # create_subsystem 0 00:25:18.505 02:22:32 -- target/dif.sh@18 -- # local sub_id=0 00:25:18.506 02:22:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:18.506 02:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.506 02:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:18.506 bdev_null0 00:25:18.506 02:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.506 02:22:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:18.506 02:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.506 02:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:18.506 02:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.506 02:22:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:18.506 02:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.506 02:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:18.506 02:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.506 02:22:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:18.506 02:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.506 02:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:18.506 [2024-05-14 02:22:32.910510] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.506 02:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.506 02:22:32 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:18.506 02:22:32 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:18.506 02:22:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:18.506 02:22:32 -- nvmf/common.sh@520 -- # config=() 00:25:18.506 02:22:32 -- nvmf/common.sh@520 -- # local subsystem config 00:25:18.506 02:22:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:18.506 02:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:18.506 { 00:25:18.506 "params": { 00:25:18.506 "name": "Nvme$subsystem", 00:25:18.506 "trtype": "$TEST_TRANSPORT", 00:25:18.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.506 "adrfam": "ipv4", 00:25:18.506 "trsvcid": "$NVMF_PORT", 00:25:18.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.506 "hdgst": ${hdgst:-false}, 00:25:18.506 "ddgst": ${ddgst:-false} 00:25:18.506 }, 00:25:18.506 "method": "bdev_nvme_attach_controller" 00:25:18.506 } 00:25:18.506 EOF 00:25:18.506 )") 00:25:18.506 02:22:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:18.506 02:22:32 -- target/dif.sh@82 -- # gen_fio_conf 00:25:18.506 02:22:32 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:18.506 02:22:32 -- target/dif.sh@54 -- # local file 00:25:18.506 02:22:32 -- nvmf/common.sh@542 -- # cat 00:25:18.506 02:22:32 -- target/dif.sh@56 -- # cat 00:25:18.506 02:22:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:18.506 02:22:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:18.506 02:22:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:18.506 02:22:32 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:18.506 02:22:32 -- common/autotest_common.sh@1320 -- # shift 00:25:18.506 02:22:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:18.506 02:22:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:18.506 02:22:32 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:18.506 02:22:32 -- target/dif.sh@72 -- # (( file <= files )) 00:25:18.506 02:22:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:18.506 02:22:32 -- common/autotest_common.sh@1324 -- # 
awk '{print $3}' 00:25:18.506 02:22:32 -- nvmf/common.sh@544 -- # jq . 00:25:18.506 02:22:32 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:18.506 02:22:32 -- nvmf/common.sh@545 -- # IFS=, 00:25:18.506 02:22:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:18.506 "params": { 00:25:18.506 "name": "Nvme0", 00:25:18.506 "trtype": "tcp", 00:25:18.506 "traddr": "10.0.0.2", 00:25:18.506 "adrfam": "ipv4", 00:25:18.506 "trsvcid": "4420", 00:25:18.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:18.506 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:18.506 "hdgst": false, 00:25:18.506 "ddgst": false 00:25:18.506 }, 00:25:18.506 "method": "bdev_nvme_attach_controller" 00:25:18.506 }' 00:25:18.506 02:22:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:18.506 02:22:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:18.506 02:22:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:18.506 02:22:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:18.506 02:22:32 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:18.506 02:22:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:18.506 02:22:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:18.506 02:22:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:18.506 02:22:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:18.506 02:22:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:18.766 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:18.766 fio-3.35 00:25:18.766 Starting 1 thread 00:25:19.025 [2024-05-14 02:22:33.522907] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
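The block above is the heart of the single-subsystem fio_dif_1_default case: target/dif.sh exposes a null bdev through nqn.2016-06.io.spdk:cnode0, prints the bdev_nvme_attach_controller JSON it feeds to fio, and launches fio through the SPDK bdev plugin (the ldd | grep libasan | awk steps only decide whether an ASAN runtime must be prepended to LD_PRELOAD). A minimal standalone sketch of the same flow, assuming a running nvmf_tgt with a TCP transport already created, the stock scripts/rpc.py client, and illustrative file names (nvme0.json holding the JSON printed above, single.fio holding the job options):

    # Target side: null bdev with 512-byte blocks and 16 bytes of metadata, exported over NVMe/TCP.
    # The --dif-type value mirrors the bdev_null_create calls visible elsewhere in this log.
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: the fio plugin attaches a controller to 10.0.0.2:4420 via the JSON config
    # and drives the resulting bdev with whatever job file is supplied.
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf ./nvme0.json ./single.fio

The spdk_rpc_listen/spdk_rpc_initialize errors surrounding this point appear to be the fio plugin failing to claim /var/tmp/spdk.sock, which the target application already holds; the run proceeds regardless, as the results that follow show.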
00:25:19.025 [2024-05-14 02:22:33.522978] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:31.233 00:25:31.233 filename0: (groupid=0, jobs=1): err= 0: pid=89185: Tue May 14 02:22:43 2024 00:25:31.233 read: IOPS=3055, BW=11.9MiB/s (12.5MB/s)(120MiB/10029msec) 00:25:31.233 slat (nsec): min=6962, max=81798, avg=10020.93, stdev=5119.36 00:25:31.233 clat (usec): min=420, max=41912, avg=1279.47, stdev=5430.77 00:25:31.233 lat (usec): min=428, max=41923, avg=1289.50, stdev=5430.80 00:25:31.233 clat percentiles (usec): 00:25:31.233 | 1.00th=[ 445], 5.00th=[ 465], 10.00th=[ 478], 20.00th=[ 494], 00:25:31.233 | 30.00th=[ 510], 40.00th=[ 523], 50.00th=[ 537], 60.00th=[ 545], 00:25:31.233 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 635], 00:25:31.233 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:25:31.233 | 99.99th=[41681] 00:25:31.233 bw ( KiB/s): min= 7584, max=16768, per=100.00%, avg=12261.45, stdev=2166.99, samples=20 00:25:31.233 iops : min= 1896, max= 4192, avg=3065.30, stdev=541.74, samples=20 00:25:31.233 lat (usec) : 500=23.44%, 750=74.56%, 1000=0.16% 00:25:31.233 lat (msec) : 10=0.01%, 50=1.83% 00:25:31.233 cpu : usr=89.10%, sys=9.18%, ctx=32, majf=0, minf=0 00:25:31.233 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.233 issued rwts: total=30648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.233 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:31.233 00:25:31.233 Run status group 0 (all jobs): 00:25:31.233 READ: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=120MiB (126MB), run=10029-10029msec 00:25:31.233 02:22:43 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:31.233 02:22:43 -- target/dif.sh@43 -- # local sub 00:25:31.233 02:22:43 -- target/dif.sh@45 -- # for sub in "$@" 00:25:31.233 02:22:43 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:31.233 02:22:43 -- target/dif.sh@36 -- # local sub_id=0 00:25:31.233 02:22:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:31.233 02:22:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 02:22:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.233 02:22:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:31.233 02:22:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 02:22:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.233 00:25:31.233 real 0m10.995s 00:25:31.233 user 0m9.564s 00:25:31.233 sys 0m1.167s 00:25:31.233 02:22:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:31.233 ************************************ 00:25:31.233 END TEST fio_dif_1_default 00:25:31.233 ************************************ 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 02:22:43 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:31.233 02:22:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:31.233 02:22:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 ************************************ 00:25:31.233 
START TEST fio_dif_1_multi_subsystems 00:25:31.233 ************************************ 00:25:31.233 02:22:43 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:25:31.233 02:22:43 -- target/dif.sh@92 -- # local files=1 00:25:31.233 02:22:43 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:31.233 02:22:43 -- target/dif.sh@28 -- # local sub 00:25:31.233 02:22:43 -- target/dif.sh@30 -- # for sub in "$@" 00:25:31.233 02:22:43 -- target/dif.sh@31 -- # create_subsystem 0 00:25:31.233 02:22:43 -- target/dif.sh@18 -- # local sub_id=0 00:25:31.233 02:22:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:31.233 02:22:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 bdev_null0 00:25:31.233 02:22:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.233 02:22:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:31.233 02:22:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 02:22:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.233 02:22:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:31.233 02:22:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 02:22:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.233 02:22:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:31.233 02:22:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 [2024-05-14 02:22:43.955628] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.233 02:22:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.233 02:22:43 -- target/dif.sh@30 -- # for sub in "$@" 00:25:31.233 02:22:43 -- target/dif.sh@31 -- # create_subsystem 1 00:25:31.233 02:22:43 -- target/dif.sh@18 -- # local sub_id=1 00:25:31.233 02:22:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:31.233 02:22:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 bdev_null1 00:25:31.233 02:22:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.233 02:22:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:31.233 02:22:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 02:22:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.233 02:22:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:31.233 02:22:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.233 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 02:22:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.233 02:22:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.233 02:22:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.233 
02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:31.233 02:22:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.233 02:22:43 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:31.233 02:22:43 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:31.233 02:22:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:31.233 02:22:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.233 02:22:43 -- nvmf/common.sh@520 -- # config=() 00:25:31.233 02:22:43 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.233 02:22:43 -- nvmf/common.sh@520 -- # local subsystem config 00:25:31.233 02:22:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:31.233 02:22:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.233 02:22:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:31.233 02:22:43 -- target/dif.sh@82 -- # gen_fio_conf 00:25:31.233 02:22:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.233 { 00:25:31.233 "params": { 00:25:31.233 "name": "Nvme$subsystem", 00:25:31.233 "trtype": "$TEST_TRANSPORT", 00:25:31.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.233 "adrfam": "ipv4", 00:25:31.233 "trsvcid": "$NVMF_PORT", 00:25:31.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.233 "hdgst": ${hdgst:-false}, 00:25:31.233 "ddgst": ${ddgst:-false} 00:25:31.233 }, 00:25:31.233 "method": "bdev_nvme_attach_controller" 00:25:31.233 } 00:25:31.233 EOF 00:25:31.233 )") 00:25:31.233 02:22:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:31.233 02:22:43 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:31.233 02:22:43 -- target/dif.sh@54 -- # local file 00:25:31.233 02:22:43 -- common/autotest_common.sh@1320 -- # shift 00:25:31.233 02:22:43 -- target/dif.sh@56 -- # cat 00:25:31.233 02:22:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:31.233 02:22:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.233 02:22:43 -- nvmf/common.sh@542 -- # cat 00:25:31.233 02:22:43 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:31.233 02:22:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:31.233 02:22:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:31.233 02:22:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:31.233 02:22:43 -- target/dif.sh@72 -- # (( file <= files )) 00:25:31.233 02:22:43 -- target/dif.sh@73 -- # cat 00:25:31.234 02:22:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.234 02:22:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.234 { 00:25:31.234 "params": { 00:25:31.234 "name": "Nvme$subsystem", 00:25:31.234 "trtype": "$TEST_TRANSPORT", 00:25:31.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.234 "adrfam": "ipv4", 00:25:31.234 "trsvcid": "$NVMF_PORT", 00:25:31.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.234 "hdgst": ${hdgst:-false}, 00:25:31.234 "ddgst": ${ddgst:-false} 00:25:31.234 }, 00:25:31.234 "method": "bdev_nvme_attach_controller" 00:25:31.234 } 00:25:31.234 EOF 00:25:31.234 )") 00:25:31.234 02:22:44 -- nvmf/common.sh@542 -- # cat 00:25:31.234 02:22:44 -- 
target/dif.sh@72 -- # (( file++ )) 00:25:31.234 02:22:44 -- target/dif.sh@72 -- # (( file <= files )) 00:25:31.234 02:22:44 -- nvmf/common.sh@544 -- # jq . 00:25:31.234 02:22:44 -- nvmf/common.sh@545 -- # IFS=, 00:25:31.234 02:22:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:31.234 "params": { 00:25:31.234 "name": "Nvme0", 00:25:31.234 "trtype": "tcp", 00:25:31.234 "traddr": "10.0.0.2", 00:25:31.234 "adrfam": "ipv4", 00:25:31.234 "trsvcid": "4420", 00:25:31.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:31.234 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:31.234 "hdgst": false, 00:25:31.234 "ddgst": false 00:25:31.234 }, 00:25:31.234 "method": "bdev_nvme_attach_controller" 00:25:31.234 },{ 00:25:31.234 "params": { 00:25:31.234 "name": "Nvme1", 00:25:31.234 "trtype": "tcp", 00:25:31.234 "traddr": "10.0.0.2", 00:25:31.234 "adrfam": "ipv4", 00:25:31.234 "trsvcid": "4420", 00:25:31.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:31.234 "hdgst": false, 00:25:31.234 "ddgst": false 00:25:31.234 }, 00:25:31.234 "method": "bdev_nvme_attach_controller" 00:25:31.234 }' 00:25:31.234 02:22:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:31.234 02:22:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:31.234 02:22:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.234 02:22:44 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:31.234 02:22:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:31.234 02:22:44 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:31.234 02:22:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:31.234 02:22:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:31.234 02:22:44 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:31.234 02:22:44 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.234 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:31.234 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:31.234 fio-3.35 00:25:31.234 Starting 2 threads 00:25:31.234 [2024-05-14 02:22:44.680697] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
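The two-subsystem pass (fio_dif_1_multi_subsystems) differs from the previous run only in that the JSON above attaches two controllers, Nvme0 and Nvme1, and gen_fio_conf emits one job per file. The job file itself is handed to fio over /dev/fd/61 and never echoed into the log, so the following is a reconstruction from the observed output (randread, 4096-byte blocks, iodepth 4, roughly 10-second runs, one job per bdev) rather than a copy of what the harness generates; the Nvme0n1/Nvme1n1 names assume SPDK's usual <controller>n<nsid> bdev naming:

    # Plausible shape of the job file passed on /dev/fd/61 (reconstructed, not taken from the log).
    cat > multi.fio <<'EOF'
    [global]
    thread=1
    time_based=1
    runtime=10
    rw=randread
    bs=4096
    iodepth=4

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF

    # Same launch as before, now with a JSON config that attaches both controllers.
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf ./nvme01.json ./multi.fio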
00:25:31.234 [2024-05-14 02:22:44.680777] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:41.206 00:25:41.206 filename0: (groupid=0, jobs=1): err= 0: pid=89343: Tue May 14 02:22:54 2024 00:25:41.206 read: IOPS=183, BW=733KiB/s (751kB/s)(7360KiB/10038msec) 00:25:41.206 slat (nsec): min=7479, max=51073, avg=10760.06, stdev=4653.16 00:25:41.206 clat (usec): min=449, max=42467, avg=21789.38, stdev=20250.54 00:25:41.206 lat (usec): min=457, max=42478, avg=21800.14, stdev=20250.75 00:25:41.206 clat percentiles (usec): 00:25:41.206 | 1.00th=[ 465], 5.00th=[ 478], 10.00th=[ 494], 20.00th=[ 510], 00:25:41.206 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[40633], 60.00th=[41157], 00:25:41.206 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:25:41.206 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:25:41.206 | 99.99th=[42206] 00:25:41.206 bw ( KiB/s): min= 512, max= 960, per=48.32%, avg=734.40, stdev=123.49, samples=20 00:25:41.206 iops : min= 128, max= 240, avg=183.60, stdev=30.87, samples=20 00:25:41.206 lat (usec) : 500=14.08%, 750=29.62%, 1000=3.70% 00:25:41.206 lat (msec) : 2=0.22%, 50=52.39% 00:25:41.206 cpu : usr=95.91%, sys=3.64%, ctx=14, majf=0, minf=0 00:25:41.206 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:41.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.206 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.206 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:41.206 filename1: (groupid=0, jobs=1): err= 0: pid=89344: Tue May 14 02:22:54 2024 00:25:41.206 read: IOPS=196, BW=787KiB/s (806kB/s)(7888KiB/10024msec) 00:25:41.206 slat (nsec): min=7227, max=42107, avg=10517.80, stdev=4494.46 00:25:41.206 clat (usec): min=448, max=42854, avg=20301.27, stdev=20273.78 00:25:41.206 lat (usec): min=456, max=42885, avg=20311.79, stdev=20274.09 00:25:41.206 clat percentiles (usec): 00:25:41.206 | 1.00th=[ 461], 5.00th=[ 482], 10.00th=[ 494], 20.00th=[ 515], 00:25:41.206 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 914], 60.00th=[40633], 00:25:41.206 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:25:41.206 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:25:41.207 | 99.99th=[42730] 00:25:41.207 bw ( KiB/s): min= 608, max= 1632, per=51.81%, avg=787.20, stdev=228.03, samples=20 00:25:41.207 iops : min= 152, max= 408, avg=196.80, stdev=57.01, samples=20 00:25:41.207 lat (usec) : 500=13.24%, 750=34.43%, 1000=3.40% 00:25:41.207 lat (msec) : 2=0.25%, 50=48.68% 00:25:41.207 cpu : usr=94.86%, sys=4.70%, ctx=104, majf=0, minf=9 00:25:41.207 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:41.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.207 issued rwts: total=1972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.207 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:41.207 00:25:41.207 Run status group 0 (all jobs): 00:25:41.207 READ: bw=1519KiB/s (1555kB/s), 733KiB/s-787KiB/s (751kB/s-806kB/s), io=14.9MiB (15.6MB), run=10024-10038msec 00:25:41.207 02:22:55 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:41.207 02:22:55 -- target/dif.sh@43 -- # local sub 00:25:41.207 02:22:55 -- target/dif.sh@45 -- # for sub in 
"$@" 00:25:41.207 02:22:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:41.207 02:22:55 -- target/dif.sh@36 -- # local sub_id=0 00:25:41.207 02:22:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:41.207 02:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.207 02:22:55 -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 02:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.207 02:22:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:41.207 02:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.207 02:22:55 -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 02:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.207 02:22:55 -- target/dif.sh@45 -- # for sub in "$@" 00:25:41.207 02:22:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:41.207 02:22:55 -- target/dif.sh@36 -- # local sub_id=1 00:25:41.207 02:22:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.207 02:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.207 02:22:55 -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 02:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.207 02:22:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:41.207 02:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.207 02:22:55 -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 ************************************ 00:25:41.207 END TEST fio_dif_1_multi_subsystems 00:25:41.207 ************************************ 00:25:41.207 02:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.207 00:25:41.207 real 0m11.138s 00:25:41.207 user 0m19.888s 00:25:41.207 sys 0m1.088s 00:25:41.207 02:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:41.207 02:22:55 -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 02:22:55 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:41.207 02:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:41.207 02:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:41.207 02:22:55 -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 ************************************ 00:25:41.207 START TEST fio_dif_rand_params 00:25:41.207 ************************************ 00:25:41.207 02:22:55 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:25:41.207 02:22:55 -- target/dif.sh@100 -- # local NULL_DIF 00:25:41.207 02:22:55 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:41.207 02:22:55 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:41.207 02:22:55 -- target/dif.sh@103 -- # bs=128k 00:25:41.207 02:22:55 -- target/dif.sh@103 -- # numjobs=3 00:25:41.207 02:22:55 -- target/dif.sh@103 -- # iodepth=3 00:25:41.207 02:22:55 -- target/dif.sh@103 -- # runtime=5 00:25:41.207 02:22:55 -- target/dif.sh@105 -- # create_subsystems 0 00:25:41.207 02:22:55 -- target/dif.sh@28 -- # local sub 00:25:41.207 02:22:55 -- target/dif.sh@30 -- # for sub in "$@" 00:25:41.207 02:22:55 -- target/dif.sh@31 -- # create_subsystem 0 00:25:41.207 02:22:55 -- target/dif.sh@18 -- # local sub_id=0 00:25:41.207 02:22:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:41.207 02:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.207 02:22:55 -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 bdev_null0 00:25:41.207 02:22:55 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.207 02:22:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:41.207 02:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.207 02:22:55 -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 02:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.207 02:22:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:41.207 02:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.207 02:22:55 -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 02:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.207 02:22:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:41.207 02:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.207 02:22:55 -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 [2024-05-14 02:22:55.150955] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.207 02:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.207 02:22:55 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:41.207 02:22:55 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:41.207 02:22:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:41.207 02:22:55 -- nvmf/common.sh@520 -- # config=() 00:25:41.207 02:22:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.207 02:22:55 -- nvmf/common.sh@520 -- # local subsystem config 00:25:41.207 02:22:55 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.207 02:22:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.207 02:22:55 -- target/dif.sh@82 -- # gen_fio_conf 00:25:41.207 02:22:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.207 { 00:25:41.207 "params": { 00:25:41.207 "name": "Nvme$subsystem", 00:25:41.207 "trtype": "$TEST_TRANSPORT", 00:25:41.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.207 "adrfam": "ipv4", 00:25:41.207 "trsvcid": "$NVMF_PORT", 00:25:41.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.207 "hdgst": ${hdgst:-false}, 00:25:41.207 "ddgst": ${ddgst:-false} 00:25:41.207 }, 00:25:41.207 "method": "bdev_nvme_attach_controller" 00:25:41.207 } 00:25:41.207 EOF 00:25:41.207 )") 00:25:41.207 02:22:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:41.207 02:22:55 -- target/dif.sh@54 -- # local file 00:25:41.207 02:22:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:41.207 02:22:55 -- target/dif.sh@56 -- # cat 00:25:41.207 02:22:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:41.207 02:22:55 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:41.207 02:22:55 -- common/autotest_common.sh@1320 -- # shift 00:25:41.207 02:22:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:41.207 02:22:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:41.207 02:22:55 -- nvmf/common.sh@542 -- # cat 00:25:41.207 02:22:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:41.207 02:22:55 -- common/autotest_common.sh@1324 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:41.207 02:22:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:41.207 02:22:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:41.207 02:22:55 -- target/dif.sh@72 -- # (( file <= files )) 00:25:41.207 02:22:55 -- nvmf/common.sh@544 -- # jq . 00:25:41.207 02:22:55 -- nvmf/common.sh@545 -- # IFS=, 00:25:41.207 02:22:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:41.207 "params": { 00:25:41.207 "name": "Nvme0", 00:25:41.207 "trtype": "tcp", 00:25:41.207 "traddr": "10.0.0.2", 00:25:41.207 "adrfam": "ipv4", 00:25:41.207 "trsvcid": "4420", 00:25:41.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:41.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:41.207 "hdgst": false, 00:25:41.207 "ddgst": false 00:25:41.207 }, 00:25:41.207 "method": "bdev_nvme_attach_controller" 00:25:41.207 }' 00:25:41.207 02:22:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:41.207 02:22:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:41.207 02:22:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:41.207 02:22:55 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:41.207 02:22:55 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:41.207 02:22:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:41.207 02:22:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:41.207 02:22:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:41.208 02:22:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:41.208 02:22:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.208 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:41.208 ... 00:25:41.208 fio-3.35 00:25:41.208 Starting 3 threads 00:25:41.208 [2024-05-14 02:22:55.760817] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
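fio_dif_rand_params keeps the same target/initiator pattern and only varies the protection-information settings and the I/O shape: this first pass uses a DIF type 3 null bdev and, per the NULL_DIF/bs/numjobs/iodepth/runtime variables set above, issues 128 KiB random reads at queue depth 3 from three jobs for 5 seconds. A hedged command-line equivalent, assuming the same subsystem and listener setup as before and illustrative nvme0.json and job names (the harness generates both on the fly):

    # Only the bdev creation and the fio parameters change relative to the earlier passes.
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./nvme0.json \
        --name=filename0 --filename=Nvme0n1 --thread=1 \
        --rw=randread --bs=128k --iodepth=3 --numjobs=3 --time_based=1 --runtime=5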
00:25:41.208 [2024-05-14 02:22:55.760896] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:46.475 00:25:46.475 filename0: (groupid=0, jobs=1): err= 0: pid=89499: Tue May 14 02:23:00 2024 00:25:46.475 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(148MiB/5003msec) 00:25:46.475 slat (nsec): min=7627, max=46649, avg=13176.35, stdev=4898.04 00:25:46.475 clat (usec): min=6385, max=54650, avg=12667.06, stdev=3261.54 00:25:46.475 lat (usec): min=6396, max=54662, avg=12680.24, stdev=3261.75 00:25:46.475 clat percentiles (usec): 00:25:46.475 | 1.00th=[ 6915], 5.00th=[ 8356], 10.00th=[10552], 20.00th=[11600], 00:25:46.475 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:25:46.475 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14222], 95.00th=[14484], 00:25:46.475 | 99.00th=[15401], 99.50th=[50594], 99.90th=[52167], 99.95th=[54789], 00:25:46.475 | 99.99th=[54789] 00:25:46.475 bw ( KiB/s): min=28672, max=33603, per=35.34%, avg=30272.33, stdev=1578.88, samples=9 00:25:46.475 iops : min= 224, max= 262, avg=236.44, stdev=12.20, samples=9 00:25:46.475 lat (msec) : 10=8.37%, 20=91.12%, 100=0.51% 00:25:46.475 cpu : usr=92.46%, sys=6.02%, ctx=26, majf=0, minf=0 00:25:46.475 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.475 issued rwts: total=1183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.475 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:46.475 filename0: (groupid=0, jobs=1): err= 0: pid=89500: Tue May 14 02:23:00 2024 00:25:46.476 read: IOPS=245, BW=30.7MiB/s (32.1MB/s)(153MiB/5004msec) 00:25:46.476 slat (usec): min=6, max=280, avg=13.85, stdev= 9.13 00:25:46.476 clat (usec): min=6732, max=54430, avg=12214.58, stdev=5097.35 00:25:46.476 lat (usec): min=6743, max=54446, avg=12228.42, stdev=5097.43 00:25:46.476 clat percentiles (usec): 00:25:46.476 | 1.00th=[ 8160], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10814], 00:25:46.476 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:25:46.476 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12911], 95.00th=[13173], 00:25:46.476 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:25:46.476 | 99.99th=[54264] 00:25:46.476 bw ( KiB/s): min=27648, max=33536, per=37.06%, avg=31744.00, stdev=1796.57, samples=9 00:25:46.476 iops : min= 216, max= 262, avg=248.00, stdev=14.04, samples=9 00:25:46.476 lat (msec) : 10=5.70%, 20=92.83%, 100=1.47% 00:25:46.476 cpu : usr=91.51%, sys=6.66%, ctx=72, majf=0, minf=0 00:25:46.476 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.476 issued rwts: total=1227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:46.476 filename0: (groupid=0, jobs=1): err= 0: pid=89501: Tue May 14 02:23:00 2024 00:25:46.476 read: IOPS=187, BW=23.5MiB/s (24.6MB/s)(117MiB/5002msec) 00:25:46.476 slat (nsec): min=7564, max=48723, avg=11203.89, stdev=4778.83 00:25:46.476 clat (usec): min=8479, max=19177, avg=15952.94, stdev=2046.13 00:25:46.476 lat (usec): min=8487, max=19193, avg=15964.14, stdev=2046.19 00:25:46.476 clat percentiles (usec): 00:25:46.476 | 1.00th=[ 9503], 
5.00th=[10552], 10.00th=[13435], 20.00th=[15139], 00:25:46.476 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16319], 60.00th=[16909], 00:25:46.476 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[17957], 00:25:46.476 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:25:46.476 | 99.99th=[19268] 00:25:46.476 bw ( KiB/s): min=22272, max=26880, per=27.79%, avg=23808.00, stdev=1487.23, samples=9 00:25:46.476 iops : min= 174, max= 210, avg=186.00, stdev=11.62, samples=9 00:25:46.476 lat (msec) : 10=2.24%, 20=97.76% 00:25:46.476 cpu : usr=92.54%, sys=6.08%, ctx=4, majf=0, minf=0 00:25:46.476 IO depths : 1=33.2%, 2=66.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.476 issued rwts: total=939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:46.476 00:25:46.476 Run status group 0 (all jobs): 00:25:46.476 READ: bw=83.7MiB/s (87.7MB/s), 23.5MiB/s-30.7MiB/s (24.6MB/s-32.1MB/s), io=419MiB (439MB), run=5002-5004msec 00:25:46.735 02:23:01 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:46.735 02:23:01 -- target/dif.sh@43 -- # local sub 00:25:46.735 02:23:01 -- target/dif.sh@45 -- # for sub in "$@" 00:25:46.735 02:23:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:46.735 02:23:01 -- target/dif.sh@36 -- # local sub_id=0 00:25:46.735 02:23:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:46.735 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.735 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.735 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.735 02:23:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:46.735 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.735 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.735 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.735 02:23:01 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:46.735 02:23:01 -- target/dif.sh@109 -- # bs=4k 00:25:46.735 02:23:01 -- target/dif.sh@109 -- # numjobs=8 00:25:46.735 02:23:01 -- target/dif.sh@109 -- # iodepth=16 00:25:46.735 02:23:01 -- target/dif.sh@109 -- # runtime= 00:25:46.735 02:23:01 -- target/dif.sh@109 -- # files=2 00:25:46.735 02:23:01 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:46.735 02:23:01 -- target/dif.sh@28 -- # local sub 00:25:46.735 02:23:01 -- target/dif.sh@30 -- # for sub in "$@" 00:25:46.735 02:23:01 -- target/dif.sh@31 -- # create_subsystem 0 00:25:46.735 02:23:01 -- target/dif.sh@18 -- # local sub_id=0 00:25:46.735 02:23:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:46.735 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.735 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.735 bdev_null0 00:25:46.735 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.735 02:23:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:46.735 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.735 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.735 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.735 02:23:01 -- target/dif.sh@23 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:46.735 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.735 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.735 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.735 02:23:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.735 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.735 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.735 [2024-05-14 02:23:01.120069] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.735 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.735 02:23:01 -- target/dif.sh@30 -- # for sub in "$@" 00:25:46.735 02:23:01 -- target/dif.sh@31 -- # create_subsystem 1 00:25:46.735 02:23:01 -- target/dif.sh@18 -- # local sub_id=1 00:25:46.735 02:23:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:46.735 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.735 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.735 bdev_null1 00:25:46.735 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.736 02:23:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:46.736 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.736 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.736 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.736 02:23:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:46.736 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.736 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.736 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.736 02:23:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.736 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.736 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.736 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.736 02:23:01 -- target/dif.sh@30 -- # for sub in "$@" 00:25:46.736 02:23:01 -- target/dif.sh@31 -- # create_subsystem 2 00:25:46.736 02:23:01 -- target/dif.sh@18 -- # local sub_id=2 00:25:46.736 02:23:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:46.736 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.736 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.736 bdev_null2 00:25:46.736 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.736 02:23:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:46.736 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.736 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.736 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.736 02:23:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:46.736 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.736 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.736 02:23:01 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.736 02:23:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:46.736 02:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.736 02:23:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.736 02:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.736 02:23:01 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:46.736 02:23:01 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:46.736 02:23:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:46.736 02:23:01 -- nvmf/common.sh@520 -- # config=() 00:25:46.736 02:23:01 -- nvmf/common.sh@520 -- # local subsystem config 00:25:46.736 02:23:01 -- target/dif.sh@82 -- # gen_fio_conf 00:25:46.736 02:23:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.736 02:23:01 -- target/dif.sh@54 -- # local file 00:25:46.736 02:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.736 02:23:01 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.736 02:23:01 -- target/dif.sh@56 -- # cat 00:25:46.736 02:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.736 { 00:25:46.736 "params": { 00:25:46.736 "name": "Nvme$subsystem", 00:25:46.736 "trtype": "$TEST_TRANSPORT", 00:25:46.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.736 "adrfam": "ipv4", 00:25:46.736 "trsvcid": "$NVMF_PORT", 00:25:46.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.736 "hdgst": ${hdgst:-false}, 00:25:46.736 "ddgst": ${ddgst:-false} 00:25:46.736 }, 00:25:46.736 "method": "bdev_nvme_attach_controller" 00:25:46.736 } 00:25:46.736 EOF 00:25:46.736 )") 00:25:46.736 02:23:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:46.736 02:23:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:46.736 02:23:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:46.736 02:23:01 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.736 02:23:01 -- nvmf/common.sh@542 -- # cat 00:25:46.736 02:23:01 -- common/autotest_common.sh@1320 -- # shift 00:25:46.736 02:23:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:46.736 02:23:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.736 02:23:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:46.736 02:23:01 -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.736 02:23:01 -- target/dif.sh@73 -- # cat 00:25:46.736 02:23:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.736 02:23:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:46.736 02:23:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:46.736 02:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.736 02:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.736 { 00:25:46.736 "params": { 00:25:46.736 "name": "Nvme$subsystem", 00:25:46.736 "trtype": "$TEST_TRANSPORT", 00:25:46.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.736 "adrfam": "ipv4", 00:25:46.736 "trsvcid": "$NVMF_PORT", 00:25:46.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:25:46.736 "hdgst": ${hdgst:-false}, 00:25:46.736 "ddgst": ${ddgst:-false} 00:25:46.736 }, 00:25:46.736 "method": "bdev_nvme_attach_controller" 00:25:46.736 } 00:25:46.736 EOF 00:25:46.736 )") 00:25:46.736 02:23:01 -- target/dif.sh@72 -- # (( file++ )) 00:25:46.736 02:23:01 -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.736 02:23:01 -- nvmf/common.sh@542 -- # cat 00:25:46.736 02:23:01 -- target/dif.sh@73 -- # cat 00:25:46.736 02:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.736 02:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.736 { 00:25:46.736 "params": { 00:25:46.736 "name": "Nvme$subsystem", 00:25:46.736 "trtype": "$TEST_TRANSPORT", 00:25:46.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.736 "adrfam": "ipv4", 00:25:46.736 "trsvcid": "$NVMF_PORT", 00:25:46.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.736 "hdgst": ${hdgst:-false}, 00:25:46.736 "ddgst": ${ddgst:-false} 00:25:46.736 }, 00:25:46.736 "method": "bdev_nvme_attach_controller" 00:25:46.736 } 00:25:46.736 EOF 00:25:46.736 )") 00:25:46.736 02:23:01 -- target/dif.sh@72 -- # (( file++ )) 00:25:46.736 02:23:01 -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.736 02:23:01 -- nvmf/common.sh@542 -- # cat 00:25:46.736 02:23:01 -- nvmf/common.sh@544 -- # jq . 00:25:46.736 02:23:01 -- nvmf/common.sh@545 -- # IFS=, 00:25:46.736 02:23:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:46.736 "params": { 00:25:46.736 "name": "Nvme0", 00:25:46.736 "trtype": "tcp", 00:25:46.736 "traddr": "10.0.0.2", 00:25:46.736 "adrfam": "ipv4", 00:25:46.736 "trsvcid": "4420", 00:25:46.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:46.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:46.736 "hdgst": false, 00:25:46.736 "ddgst": false 00:25:46.736 }, 00:25:46.736 "method": "bdev_nvme_attach_controller" 00:25:46.736 },{ 00:25:46.736 "params": { 00:25:46.736 "name": "Nvme1", 00:25:46.736 "trtype": "tcp", 00:25:46.736 "traddr": "10.0.0.2", 00:25:46.736 "adrfam": "ipv4", 00:25:46.736 "trsvcid": "4420", 00:25:46.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.736 "hdgst": false, 00:25:46.736 "ddgst": false 00:25:46.736 }, 00:25:46.736 "method": "bdev_nvme_attach_controller" 00:25:46.736 },{ 00:25:46.736 "params": { 00:25:46.736 "name": "Nvme2", 00:25:46.736 "trtype": "tcp", 00:25:46.736 "traddr": "10.0.0.2", 00:25:46.736 "adrfam": "ipv4", 00:25:46.736 "trsvcid": "4420", 00:25:46.736 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:46.736 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:46.736 "hdgst": false, 00:25:46.736 "ddgst": false 00:25:46.736 }, 00:25:46.736 "method": "bdev_nvme_attach_controller" 00:25:46.736 }' 00:25:46.736 02:23:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:46.736 02:23:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:46.736 02:23:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.736 02:23:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.736 02:23:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:46.736 02:23:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:46.736 02:23:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:46.736 02:23:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:46.736 02:23:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:46.736 02:23:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.996 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:46.996 ... 00:25:46.996 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:46.996 ... 00:25:46.996 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:46.996 ... 00:25:46.996 fio-3.35 00:25:46.996 Starting 24 threads 00:25:47.563 [2024-05-14 02:23:01.995177] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:47.563 [2024-05-14 02:23:01.995241] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:59.757 00:25:59.757 filename0: (groupid=0, jobs=1): err= 0: pid=89597: Tue May 14 02:23:12 2024 00:25:59.757 read: IOPS=249, BW=997KiB/s (1021kB/s)(9.78MiB/10050msec) 00:25:59.757 slat (usec): min=4, max=8029, avg=23.55, stdev=320.03 00:25:59.757 clat (msec): min=20, max=144, avg=63.99, stdev=19.85 00:25:59.757 lat (msec): min=20, max=144, avg=64.02, stdev=19.85 00:25:59.757 clat percentiles (msec): 00:25:59.757 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:25:59.757 | 30.00th=[ 51], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 71], 00:25:59.757 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 87], 95.00th=[ 96], 00:25:59.757 | 99.00th=[ 123], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:25:59.757 | 99.99th=[ 144] 00:25:59.757 bw ( KiB/s): min= 768, max= 1168, per=4.56%, avg=995.30, stdev=139.64, samples=20 00:25:59.757 iops : min= 192, max= 292, avg=248.80, stdev=34.88, samples=20 00:25:59.757 lat (msec) : 50=29.79%, 100=66.13%, 250=4.07% 00:25:59.757 cpu : usr=32.34%, sys=0.90%, ctx=882, majf=0, minf=9 00:25:59.757 IO depths : 1=0.4%, 2=1.2%, 4=6.7%, 8=77.8%, 16=13.9%, 32=0.0%, >=64=0.0% 00:25:59.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.757 complete : 0=0.0%, 4=89.4%, 8=6.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.757 issued rwts: total=2504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.757 filename0: (groupid=0, jobs=1): err= 0: pid=89598: Tue May 14 02:23:12 2024 00:25:59.757 read: IOPS=217, BW=872KiB/s (893kB/s)(8748KiB/10035msec) 00:25:59.757 slat (usec): min=4, max=8025, avg=22.28, stdev=296.69 00:25:59.757 clat (msec): min=33, max=169, avg=73.23, stdev=22.19 00:25:59.757 lat (msec): min=33, max=169, avg=73.25, stdev=22.19 00:25:59.757 clat percentiles (msec): 00:25:59.757 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:25:59.757 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:25:59.757 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 106], 95.00th=[ 112], 00:25:59.757 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 171], 99.95th=[ 171], 00:25:59.757 | 99.99th=[ 171] 00:25:59.758 bw ( KiB/s): min= 641, max= 1152, per=3.98%, avg=868.45, stdev=119.59, samples=20 00:25:59.758 iops : min= 160, max= 288, avg=217.10, stdev=29.92, samples=20 00:25:59.758 lat (msec) : 50=17.65%, 100=71.06%, 250=11.29% 00:25:59.758 cpu : usr=34.19%, sys=1.05%, ctx=890, majf=0, minf=9 00:25:59.758 IO depths : 1=1.6%, 2=3.6%, 4=11.7%, 8=71.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:25:59.758 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 issued rwts: total=2187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.758 filename0: (groupid=0, jobs=1): err= 0: pid=89599: Tue May 14 02:23:12 2024 00:25:59.758 read: IOPS=208, BW=835KiB/s (855kB/s)(8368KiB/10021msec) 00:25:59.758 slat (usec): min=3, max=10020, avg=23.58, stdev=330.42 00:25:59.758 clat (msec): min=31, max=161, avg=76.39, stdev=20.82 00:25:59.758 lat (msec): min=31, max=161, avg=76.42, stdev=20.83 00:25:59.758 clat percentiles (msec): 00:25:59.758 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:25:59.758 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:25:59.758 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 111], 00:25:59.758 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:25:59.758 | 99.99th=[ 163] 00:25:59.758 bw ( KiB/s): min= 640, max= 952, per=3.82%, avg=832.20, stdev=71.55, samples=20 00:25:59.758 iops : min= 160, max= 238, avg=208.00, stdev=17.86, samples=20 00:25:59.758 lat (msec) : 50=10.09%, 100=77.72%, 250=12.19% 00:25:59.758 cpu : usr=32.40%, sys=0.77%, ctx=937, majf=0, minf=9 00:25:59.758 IO depths : 1=1.9%, 2=4.4%, 4=13.3%, 8=69.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:25:59.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.758 filename0: (groupid=0, jobs=1): err= 0: pid=89600: Tue May 14 02:23:12 2024 00:25:59.758 read: IOPS=203, BW=813KiB/s (832kB/s)(8128KiB/10002msec) 00:25:59.758 slat (nsec): min=6930, max=31207, avg=11611.03, stdev=3991.39 00:25:59.758 clat (usec): min=1727, max=156469, avg=78679.31, stdev=24397.65 00:25:59.758 lat (usec): min=1735, max=156484, avg=78690.92, stdev=24397.73 00:25:59.758 clat percentiles (msec): 00:25:59.758 | 1.00th=[ 4], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 61], 00:25:59.758 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:25:59.758 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:25:59.758 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:25:59.758 | 99.99th=[ 157] 00:25:59.758 bw ( KiB/s): min= 640, max= 1152, per=3.63%, avg=792.47, stdev=123.34, samples=19 00:25:59.758 iops : min= 160, max= 288, avg=198.11, stdev=30.84, samples=19 00:25:59.758 lat (msec) : 2=0.79%, 4=0.89%, 20=0.69%, 50=7.97%, 100=74.41% 00:25:59.758 lat (msec) : 250=15.26% 00:25:59.758 cpu : usr=32.12%, sys=0.97%, ctx=914, majf=0, minf=9 00:25:59.758 IO depths : 1=1.6%, 2=3.7%, 4=13.6%, 8=69.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:25:59.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.758 filename0: (groupid=0, jobs=1): err= 0: pid=89601: Tue May 14 02:23:12 2024 00:25:59.758 read: IOPS=251, BW=1006KiB/s (1030kB/s)(9.85MiB/10030msec) 00:25:59.758 slat (usec): min=4, max=8020, avg=15.68, stdev=178.35 00:25:59.758 clat (msec): min=26, max=162, avg=63.50, stdev=20.36 00:25:59.758 lat (msec): min=26, 
max=162, avg=63.51, stdev=20.36 00:25:59.758 clat percentiles (msec): 00:25:59.758 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 47], 00:25:59.758 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 66], 00:25:59.758 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 99], 00:25:59.758 | 99.00th=[ 129], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:25:59.758 | 99.99th=[ 163] 00:25:59.758 bw ( KiB/s): min= 688, max= 1296, per=4.60%, avg=1004.55, stdev=163.26, samples=20 00:25:59.758 iops : min= 172, max= 324, avg=251.10, stdev=40.86, samples=20 00:25:59.758 lat (msec) : 50=33.19%, 100=62.25%, 250=4.56% 00:25:59.758 cpu : usr=43.14%, sys=0.97%, ctx=1278, majf=0, minf=9 00:25:59.758 IO depths : 1=0.4%, 2=1.0%, 4=8.2%, 8=77.1%, 16=13.4%, 32=0.0%, >=64=0.0% 00:25:59.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 issued rwts: total=2522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.758 filename0: (groupid=0, jobs=1): err= 0: pid=89602: Tue May 14 02:23:12 2024 00:25:59.758 read: IOPS=204, BW=817KiB/s (837kB/s)(8180KiB/10013msec) 00:25:59.758 slat (usec): min=4, max=4022, avg=17.51, stdev=153.55 00:25:59.758 clat (msec): min=30, max=136, avg=78.19, stdev=18.84 00:25:59.758 lat (msec): min=30, max=136, avg=78.21, stdev=18.85 00:25:59.758 clat percentiles (msec): 00:25:59.758 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 65], 00:25:59.758 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:25:59.758 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 112], 00:25:59.758 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 136], 99.95th=[ 136], 00:25:59.758 | 99.99th=[ 136] 00:25:59.758 bw ( KiB/s): min= 752, max= 952, per=3.73%, avg=813.89, stdev=71.05, samples=19 00:25:59.758 iops : min= 188, max= 238, avg=203.47, stdev=17.76, samples=19 00:25:59.758 lat (msec) : 50=6.85%, 100=81.47%, 250=11.69% 00:25:59.758 cpu : usr=45.07%, sys=1.16%, ctx=1163, majf=0, minf=9 00:25:59.758 IO depths : 1=3.5%, 2=7.6%, 4=18.2%, 8=61.6%, 16=9.1%, 32=0.0%, >=64=0.0% 00:25:59.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 issued rwts: total=2045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.758 filename0: (groupid=0, jobs=1): err= 0: pid=89603: Tue May 14 02:23:12 2024 00:25:59.758 read: IOPS=200, BW=800KiB/s (820kB/s)(8012KiB/10011msec) 00:25:59.758 slat (nsec): min=6171, max=92241, avg=11863.33, stdev=4527.17 00:25:59.758 clat (msec): min=10, max=154, avg=79.83, stdev=21.19 00:25:59.758 lat (msec): min=10, max=154, avg=79.84, stdev=21.19 00:25:59.758 clat percentiles (msec): 00:25:59.758 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 64], 00:25:59.758 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:25:59.758 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 120], 00:25:59.758 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 155], 00:25:59.758 | 99.99th=[ 155] 00:25:59.758 bw ( KiB/s): min= 640, max= 1024, per=3.64%, avg=793.68, stdev=89.51, samples=19 00:25:59.758 iops : min= 160, max= 256, avg=198.42, stdev=22.38, samples=19 00:25:59.758 lat (msec) : 20=0.30%, 50=8.39%, 100=74.84%, 250=16.48% 00:25:59.758 cpu : usr=33.03%, sys=0.84%, 
ctx=923, majf=0, minf=9 00:25:59.758 IO depths : 1=2.2%, 2=5.0%, 4=14.6%, 8=67.4%, 16=10.8%, 32=0.0%, >=64=0.0% 00:25:59.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 complete : 0=0.0%, 4=91.1%, 8=3.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.758 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.758 filename0: (groupid=0, jobs=1): err= 0: pid=89604: Tue May 14 02:23:12 2024 00:25:59.758 read: IOPS=210, BW=844KiB/s (864kB/s)(8452KiB/10020msec) 00:25:59.758 slat (usec): min=4, max=8016, avg=21.61, stdev=253.54 00:25:59.758 clat (msec): min=23, max=165, avg=75.68, stdev=21.60 00:25:59.759 lat (msec): min=23, max=165, avg=75.70, stdev=21.59 00:25:59.759 clat percentiles (msec): 00:25:59.759 | 1.00th=[ 29], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 61], 00:25:59.759 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:25:59.759 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 114], 00:25:59.759 | 99.00th=[ 140], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 165], 00:25:59.759 | 99.99th=[ 165] 00:25:59.759 bw ( KiB/s): min= 688, max= 1024, per=3.84%, avg=838.55, stdev=93.08, samples=20 00:25:59.759 iops : min= 172, max= 256, avg=209.60, stdev=23.30, samples=20 00:25:59.759 lat (msec) : 50=10.74%, 100=77.38%, 250=11.88% 00:25:59.759 cpu : usr=43.14%, sys=1.01%, ctx=1279, majf=0, minf=9 00:25:59.759 IO depths : 1=2.4%, 2=5.3%, 4=14.5%, 8=66.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:25:59.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 complete : 0=0.0%, 4=91.3%, 8=3.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 issued rwts: total=2113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.759 filename1: (groupid=0, jobs=1): err= 0: pid=89605: Tue May 14 02:23:12 2024 00:25:59.759 read: IOPS=243, BW=974KiB/s (997kB/s)(9784KiB/10049msec) 00:25:59.759 slat (usec): min=4, max=8034, avg=21.00, stdev=256.29 00:25:59.759 clat (msec): min=18, max=154, avg=65.59, stdev=21.85 00:25:59.759 lat (msec): min=18, max=154, avg=65.61, stdev=21.86 00:25:59.759 clat percentiles (msec): 00:25:59.759 | 1.00th=[ 30], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:25:59.759 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 70], 00:25:59.759 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 108], 00:25:59.759 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 155], 99.95th=[ 155], 00:25:59.759 | 99.99th=[ 155] 00:25:59.759 bw ( KiB/s): min= 728, max= 1328, per=4.45%, avg=972.00, stdev=163.26, samples=20 00:25:59.759 iops : min= 182, max= 332, avg=243.00, stdev=40.82, samples=20 00:25:59.759 lat (msec) : 20=0.65%, 50=27.19%, 100=66.56%, 250=5.60% 00:25:59.759 cpu : usr=40.00%, sys=1.04%, ctx=1034, majf=0, minf=10 00:25:59.759 IO depths : 1=1.1%, 2=2.5%, 4=9.3%, 8=74.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:25:59.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 issued rwts: total=2446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.759 filename1: (groupid=0, jobs=1): err= 0: pid=89606: Tue May 14 02:23:12 2024 00:25:59.759 read: IOPS=226, BW=905KiB/s (926kB/s)(9076KiB/10032msec) 00:25:59.759 slat (usec): min=4, max=5021, avg=14.86, stdev=122.74 00:25:59.759 clat (msec): 
min=23, max=130, avg=70.60, stdev=20.03 00:25:59.759 lat (msec): min=23, max=130, avg=70.61, stdev=20.03 00:25:59.759 clat percentiles (msec): 00:25:59.759 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 53], 00:25:59.759 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:25:59.759 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 97], 95.00th=[ 105], 00:25:59.759 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 131], 99.95th=[ 131], 00:25:59.759 | 99.99th=[ 131] 00:25:59.759 bw ( KiB/s): min= 640, max= 1250, per=4.13%, avg=901.30, stdev=136.23, samples=20 00:25:59.759 iops : min= 160, max= 312, avg=225.20, stdev=33.97, samples=20 00:25:59.759 lat (msec) : 50=16.13%, 100=76.91%, 250=6.96% 00:25:59.759 cpu : usr=44.56%, sys=1.05%, ctx=1458, majf=0, minf=9 00:25:59.759 IO depths : 1=1.6%, 2=3.5%, 4=11.5%, 8=71.7%, 16=11.7%, 32=0.0%, >=64=0.0% 00:25:59.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 issued rwts: total=2269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.759 filename1: (groupid=0, jobs=1): err= 0: pid=89607: Tue May 14 02:23:12 2024 00:25:59.759 read: IOPS=275, BW=1102KiB/s (1129kB/s)(10.8MiB/10058msec) 00:25:59.759 slat (usec): min=6, max=8023, avg=15.02, stdev=170.17 00:25:59.759 clat (usec): min=1349, max=144645, avg=57860.96, stdev=21672.50 00:25:59.759 lat (usec): min=1363, max=144659, avg=57875.98, stdev=21671.98 00:25:59.759 clat percentiles (usec): 00:25:59.759 | 1.00th=[ 1745], 5.00th=[ 31589], 10.00th=[ 35914], 20.00th=[ 42730], 00:25:59.759 | 30.00th=[ 47449], 40.00th=[ 49021], 50.00th=[ 55837], 60.00th=[ 61080], 00:25:59.759 | 70.00th=[ 68682], 80.00th=[ 71828], 90.00th=[ 84411], 95.00th=[ 95945], 00:25:59.759 | 99.00th=[121111], 99.50th=[122160], 99.90th=[143655], 99.95th=[143655], 00:25:59.759 | 99.99th=[143655] 00:25:59.759 bw ( KiB/s): min= 776, max= 2152, per=5.05%, avg=1102.15, stdev=286.26, samples=20 00:25:59.759 iops : min= 194, max= 538, avg=275.50, stdev=71.56, samples=20 00:25:59.759 lat (msec) : 2=1.73%, 4=0.51%, 10=1.30%, 20=0.51%, 50=36.62% 00:25:59.759 lat (msec) : 100=56.02%, 250=3.32% 00:25:59.759 cpu : usr=44.01%, sys=1.05%, ctx=1524, majf=0, minf=9 00:25:59.759 IO depths : 1=1.1%, 2=2.6%, 4=10.1%, 8=73.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:25:59.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 issued rwts: total=2772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.759 filename1: (groupid=0, jobs=1): err= 0: pid=89608: Tue May 14 02:23:12 2024 00:25:59.759 read: IOPS=219, BW=880KiB/s (901kB/s)(8828KiB/10036msec) 00:25:59.759 slat (nsec): min=3759, max=84289, avg=11699.15, stdev=4776.07 00:25:59.759 clat (msec): min=32, max=138, avg=72.57, stdev=18.48 00:25:59.759 lat (msec): min=32, max=138, avg=72.58, stdev=18.48 00:25:59.759 clat percentiles (msec): 00:25:59.759 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:25:59.759 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:25:59.759 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 111], 00:25:59.759 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 140], 99.95th=[ 140], 00:25:59.759 | 99.99th=[ 140] 00:25:59.759 bw ( KiB/s): min= 640, max= 1200, per=4.02%, avg=876.35, 
stdev=117.38, samples=20 00:25:59.759 iops : min= 160, max= 300, avg=219.05, stdev=29.37, samples=20 00:25:59.759 lat (msec) : 50=11.92%, 100=79.75%, 250=8.34% 00:25:59.759 cpu : usr=42.17%, sys=0.94%, ctx=1398, majf=0, minf=9 00:25:59.759 IO depths : 1=1.4%, 2=3.3%, 4=10.7%, 8=72.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:25:59.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 complete : 0=0.0%, 4=90.4%, 8=5.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 issued rwts: total=2207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.759 filename1: (groupid=0, jobs=1): err= 0: pid=89609: Tue May 14 02:23:12 2024 00:25:59.759 read: IOPS=254, BW=1017KiB/s (1042kB/s)(9.96MiB/10028msec) 00:25:59.759 slat (usec): min=4, max=7819, avg=20.69, stdev=229.66 00:25:59.759 clat (msec): min=20, max=140, avg=62.81, stdev=19.56 00:25:59.759 lat (msec): min=20, max=140, avg=62.83, stdev=19.57 00:25:59.759 clat percentiles (msec): 00:25:59.759 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 46], 00:25:59.759 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 66], 00:25:59.759 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 101], 00:25:59.759 | 99.00th=[ 127], 99.50th=[ 131], 99.90th=[ 142], 99.95th=[ 142], 00:25:59.759 | 99.99th=[ 142] 00:25:59.759 bw ( KiB/s): min= 736, max= 1280, per=4.65%, avg=1013.50, stdev=138.24, samples=20 00:25:59.759 iops : min= 184, max= 320, avg=253.35, stdev=34.57, samples=20 00:25:59.759 lat (msec) : 50=32.04%, 100=62.67%, 250=5.29% 00:25:59.759 cpu : usr=43.22%, sys=0.94%, ctx=1344, majf=0, minf=9 00:25:59.759 IO depths : 1=1.0%, 2=2.1%, 4=8.1%, 8=75.6%, 16=13.1%, 32=0.0%, >=64=0.0% 00:25:59.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 complete : 0=0.0%, 4=89.9%, 8=6.1%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 issued rwts: total=2550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.759 filename1: (groupid=0, jobs=1): err= 0: pid=89610: Tue May 14 02:23:12 2024 00:25:59.759 read: IOPS=218, BW=875KiB/s (896kB/s)(8772KiB/10020msec) 00:25:59.759 slat (usec): min=4, max=8024, avg=21.22, stdev=228.84 00:25:59.759 clat (msec): min=20, max=146, avg=72.98, stdev=20.28 00:25:59.759 lat (msec): min=20, max=146, avg=73.00, stdev=20.28 00:25:59.759 clat percentiles (msec): 00:25:59.759 | 1.00th=[ 26], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 56], 00:25:59.759 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:25:59.759 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 99], 95.00th=[ 107], 00:25:59.759 | 99.00th=[ 124], 99.50th=[ 138], 99.90th=[ 146], 99.95th=[ 146], 00:25:59.759 | 99.99th=[ 146] 00:25:59.759 bw ( KiB/s): min= 640, max= 1168, per=3.99%, avg=870.80, stdev=130.20, samples=20 00:25:59.759 iops : min= 160, max= 292, avg=217.70, stdev=32.55, samples=20 00:25:59.759 lat (msec) : 50=14.04%, 100=77.79%, 250=8.16% 00:25:59.759 cpu : usr=43.26%, sys=0.79%, ctx=1255, majf=0, minf=9 00:25:59.759 IO depths : 1=1.8%, 2=3.9%, 4=12.2%, 8=70.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:25:59.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.759 issued rwts: total=2193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.759 filename1: (groupid=0, jobs=1): err= 0: pid=89611: 
Tue May 14 02:23:12 2024 00:25:59.759 read: IOPS=251, BW=1004KiB/s (1028kB/s)(9.84MiB/10031msec) 00:25:59.760 slat (usec): min=4, max=5018, avg=16.19, stdev=150.76 00:25:59.760 clat (msec): min=25, max=137, avg=63.55, stdev=19.80 00:25:59.760 lat (msec): min=25, max=137, avg=63.57, stdev=19.81 00:25:59.760 clat percentiles (msec): 00:25:59.760 | 1.00th=[ 30], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 46], 00:25:59.760 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 68], 00:25:59.760 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 92], 95.00th=[ 97], 00:25:59.760 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 138], 99.95th=[ 138], 00:25:59.760 | 99.99th=[ 138] 00:25:59.760 bw ( KiB/s): min= 792, max= 1304, per=4.59%, avg=1000.80, stdev=153.45, samples=20 00:25:59.760 iops : min= 198, max= 326, avg=250.15, stdev=38.36, samples=20 00:25:59.760 lat (msec) : 50=33.64%, 100=62.59%, 250=3.77% 00:25:59.760 cpu : usr=40.05%, sys=1.01%, ctx=1324, majf=0, minf=9 00:25:59.760 IO depths : 1=0.6%, 2=1.4%, 4=6.8%, 8=77.7%, 16=13.5%, 32=0.0%, >=64=0.0% 00:25:59.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 complete : 0=0.0%, 4=89.4%, 8=6.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 issued rwts: total=2518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.760 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.760 filename1: (groupid=0, jobs=1): err= 0: pid=89612: Tue May 14 02:23:12 2024 00:25:59.760 read: IOPS=206, BW=827KiB/s (847kB/s)(8300KiB/10031msec) 00:25:59.760 slat (usec): min=4, max=249, avg=11.69, stdev= 7.88 00:25:59.760 clat (msec): min=31, max=158, avg=77.26, stdev=19.03 00:25:59.760 lat (msec): min=31, max=158, avg=77.28, stdev=19.03 00:25:59.760 clat percentiles (msec): 00:25:59.760 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:25:59.760 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 83], 00:25:59.760 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 111], 00:25:59.760 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 159], 99.95th=[ 159], 00:25:59.760 | 99.99th=[ 159] 00:25:59.760 bw ( KiB/s): min= 640, max= 1010, per=3.77%, avg=823.65, stdev=90.17, samples=20 00:25:59.760 iops : min= 160, max= 252, avg=205.80, stdev=22.51, samples=20 00:25:59.760 lat (msec) : 50=6.70%, 100=82.27%, 250=11.04% 00:25:59.760 cpu : usr=33.56%, sys=0.80%, ctx=999, majf=0, minf=9 00:25:59.760 IO depths : 1=2.1%, 2=4.8%, 4=14.2%, 8=67.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:25:59.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 complete : 0=0.0%, 4=91.2%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 issued rwts: total=2075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.760 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.760 filename2: (groupid=0, jobs=1): err= 0: pid=89613: Tue May 14 02:23:12 2024 00:25:59.760 read: IOPS=257, BW=1032KiB/s (1056kB/s)(10.1MiB/10043msec) 00:25:59.760 slat (usec): min=4, max=4021, avg=13.75, stdev=101.29 00:25:59.760 clat (msec): min=21, max=137, avg=61.91, stdev=18.94 00:25:59.760 lat (msec): min=21, max=137, avg=61.93, stdev=18.94 00:25:59.760 clat percentiles (msec): 00:25:59.760 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 47], 00:25:59.760 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 65], 00:25:59.760 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 89], 95.00th=[ 97], 00:25:59.760 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 138], 00:25:59.760 | 99.99th=[ 138] 00:25:59.760 bw ( KiB/s): min= 
728, max= 1376, per=4.72%, avg=1029.60, stdev=164.67, samples=20 00:25:59.760 iops : min= 182, max= 344, avg=257.40, stdev=41.17, samples=20 00:25:59.760 lat (msec) : 50=35.02%, 100=61.43%, 250=3.55% 00:25:59.760 cpu : usr=43.78%, sys=1.12%, ctx=1240, majf=0, minf=9 00:25:59.760 IO depths : 1=0.7%, 2=1.7%, 4=9.0%, 8=76.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:25:59.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 issued rwts: total=2590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.760 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.760 filename2: (groupid=0, jobs=1): err= 0: pid=89614: Tue May 14 02:23:12 2024 00:25:59.760 read: IOPS=218, BW=873KiB/s (894kB/s)(8764KiB/10038msec) 00:25:59.760 slat (usec): min=4, max=8029, avg=18.41, stdev=242.07 00:25:59.760 clat (msec): min=33, max=141, avg=73.15, stdev=19.87 00:25:59.760 lat (msec): min=33, max=141, avg=73.17, stdev=19.86 00:25:59.760 clat percentiles (msec): 00:25:59.760 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:25:59.760 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:25:59.760 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 108], 00:25:59.760 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 142], 00:25:59.760 | 99.99th=[ 142] 00:25:59.760 bw ( KiB/s): min= 688, max= 1120, per=3.98%, avg=869.75, stdev=113.29, samples=20 00:25:59.760 iops : min= 172, max= 280, avg=217.40, stdev=28.36, samples=20 00:25:59.760 lat (msec) : 50=14.97%, 100=76.59%, 250=8.44% 00:25:59.760 cpu : usr=32.37%, sys=0.77%, ctx=954, majf=0, minf=9 00:25:59.760 IO depths : 1=1.0%, 2=2.4%, 4=9.8%, 8=74.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:25:59.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.760 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.760 filename2: (groupid=0, jobs=1): err= 0: pid=89615: Tue May 14 02:23:12 2024 00:25:59.760 read: IOPS=219, BW=879KiB/s (900kB/s)(8824KiB/10037msec) 00:25:59.760 slat (usec): min=6, max=8025, avg=27.46, stdev=351.40 00:25:59.760 clat (msec): min=32, max=145, avg=72.59, stdev=21.18 00:25:59.760 lat (msec): min=32, max=145, avg=72.62, stdev=21.17 00:25:59.760 clat percentiles (msec): 00:25:59.760 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:25:59.760 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:25:59.760 | 70.00th=[ 84], 80.00th=[ 90], 90.00th=[ 99], 95.00th=[ 112], 00:25:59.760 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 146], 00:25:59.760 | 99.99th=[ 146] 00:25:59.760 bw ( KiB/s): min= 672, max= 1040, per=4.02%, avg=876.05, stdev=114.38, samples=20 00:25:59.760 iops : min= 168, max= 260, avg=219.00, stdev=28.61, samples=20 00:25:59.760 lat (msec) : 50=18.18%, 100=72.67%, 250=9.16% 00:25:59.760 cpu : usr=32.41%, sys=0.68%, ctx=899, majf=0, minf=9 00:25:59.760 IO depths : 1=1.0%, 2=2.1%, 4=8.2%, 8=75.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:25:59.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 complete : 0=0.0%, 4=89.7%, 8=6.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.760 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.760 filename2: 
(groupid=0, jobs=1): err= 0: pid=89616: Tue May 14 02:23:12 2024 00:25:59.760 read: IOPS=244, BW=976KiB/s (1000kB/s)(9808KiB/10045msec) 00:25:59.760 slat (usec): min=4, max=354, avg=11.16, stdev= 7.88 00:25:59.760 clat (msec): min=5, max=144, avg=65.42, stdev=21.82 00:25:59.760 lat (msec): min=5, max=144, avg=65.43, stdev=21.82 00:25:59.760 clat percentiles (msec): 00:25:59.760 | 1.00th=[ 7], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 48], 00:25:59.760 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 69], 00:25:59.760 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 109], 00:25:59.760 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:25:59.760 | 99.99th=[ 144] 00:25:59.760 bw ( KiB/s): min= 768, max= 1456, per=4.47%, avg=974.40, stdev=166.84, samples=20 00:25:59.760 iops : min= 192, max= 364, avg=243.60, stdev=41.71, samples=20 00:25:59.760 lat (msec) : 10=1.31%, 20=0.65%, 50=24.51%, 100=65.86%, 250=7.67% 00:25:59.760 cpu : usr=34.91%, sys=0.77%, ctx=968, majf=0, minf=9 00:25:59.760 IO depths : 1=1.2%, 2=2.4%, 4=9.5%, 8=74.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:25:59.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 issued rwts: total=2452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.760 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.760 filename2: (groupid=0, jobs=1): err= 0: pid=89617: Tue May 14 02:23:12 2024 00:25:59.760 read: IOPS=249, BW=997KiB/s (1021kB/s)(9.78MiB/10048msec) 00:25:59.760 slat (usec): min=6, max=8022, avg=20.86, stdev=277.16 00:25:59.760 clat (msec): min=7, max=131, avg=64.01, stdev=20.11 00:25:59.760 lat (msec): min=7, max=131, avg=64.03, stdev=20.12 00:25:59.760 clat percentiles (msec): 00:25:59.760 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:25:59.760 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 69], 00:25:59.760 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 99], 00:25:59.760 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 132], 00:25:59.760 | 99.99th=[ 132] 00:25:59.760 bw ( KiB/s): min= 776, max= 1160, per=4.56%, avg=995.20, stdev=122.19, samples=20 00:25:59.760 iops : min= 194, max= 290, avg=248.80, stdev=30.55, samples=20 00:25:59.760 lat (msec) : 10=0.56%, 20=0.64%, 50=28.91%, 100=65.14%, 250=4.75% 00:25:59.760 cpu : usr=33.17%, sys=0.96%, ctx=948, majf=0, minf=9 00:25:59.760 IO depths : 1=0.5%, 2=1.3%, 4=6.7%, 8=77.9%, 16=13.6%, 32=0.0%, >=64=0.0% 00:25:59.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 complete : 0=0.0%, 4=89.4%, 8=6.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.760 issued rwts: total=2504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.760 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.760 filename2: (groupid=0, jobs=1): err= 0: pid=89618: Tue May 14 02:23:12 2024 00:25:59.760 read: IOPS=230, BW=921KiB/s (943kB/s)(9252KiB/10050msec) 00:25:59.760 slat (usec): min=3, max=8021, avg=14.27, stdev=166.60 00:25:59.760 clat (msec): min=20, max=155, avg=69.32, stdev=20.16 00:25:59.760 lat (msec): min=20, max=155, avg=69.34, stdev=20.17 00:25:59.760 clat percentiles (msec): 00:25:59.761 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 49], 00:25:59.761 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:25:59.761 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 106], 00:25:59.761 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 157], 
99.95th=[ 157], 00:25:59.761 | 99.99th=[ 157] 00:25:59.761 bw ( KiB/s): min= 736, max= 1120, per=4.21%, avg=918.80, stdev=109.48, samples=20 00:25:59.761 iops : min= 184, max= 280, avg=229.70, stdev=27.37, samples=20 00:25:59.761 lat (msec) : 50=22.14%, 100=71.55%, 250=6.31% 00:25:59.761 cpu : usr=32.33%, sys=0.86%, ctx=950, majf=0, minf=9 00:25:59.761 IO depths : 1=1.0%, 2=2.0%, 4=8.9%, 8=75.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:59.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.761 complete : 0=0.0%, 4=89.7%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.761 issued rwts: total=2313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.761 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.761 filename2: (groupid=0, jobs=1): err= 0: pid=89619: Tue May 14 02:23:12 2024 00:25:59.761 read: IOPS=206, BW=825KiB/s (844kB/s)(8264KiB/10021msec) 00:25:59.761 slat (usec): min=8, max=8023, avg=21.66, stdev=242.37 00:25:59.761 clat (msec): min=32, max=143, avg=77.43, stdev=18.87 00:25:59.761 lat (msec): min=32, max=143, avg=77.45, stdev=18.88 00:25:59.761 clat percentiles (msec): 00:25:59.761 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 63], 00:25:59.761 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:25:59.761 | 70.00th=[ 87], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 112], 00:25:59.761 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:25:59.761 | 99.99th=[ 144] 00:25:59.761 bw ( KiB/s): min= 640, max= 1024, per=3.76%, avg=819.50, stdev=93.47, samples=20 00:25:59.761 iops : min= 160, max= 256, avg=204.80, stdev=23.42, samples=20 00:25:59.761 lat (msec) : 50=5.95%, 100=82.28%, 250=11.76% 00:25:59.761 cpu : usr=39.11%, sys=1.02%, ctx=1147, majf=0, minf=9 00:25:59.761 IO depths : 1=3.1%, 2=7.2%, 4=18.7%, 8=61.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:25:59.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.761 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.761 issued rwts: total=2066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.761 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.761 filename2: (groupid=0, jobs=1): err= 0: pid=89620: Tue May 14 02:23:12 2024 00:25:59.761 read: IOPS=198, BW=792KiB/s (811kB/s)(7948KiB/10031msec) 00:25:59.761 slat (usec): min=4, max=8024, avg=31.68, stdev=380.84 00:25:59.761 clat (msec): min=32, max=143, avg=80.58, stdev=19.70 00:25:59.761 lat (msec): min=32, max=143, avg=80.61, stdev=19.69 00:25:59.761 clat percentiles (msec): 00:25:59.761 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 64], 00:25:59.761 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:25:59.761 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 112], 00:25:59.761 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 144], 00:25:59.761 | 99.99th=[ 144] 00:25:59.761 bw ( KiB/s): min= 640, max= 896, per=3.61%, avg=788.55, stdev=84.64, samples=20 00:25:59.761 iops : min= 160, max= 224, avg=197.05, stdev=21.16, samples=20 00:25:59.761 lat (msec) : 50=6.89%, 100=77.25%, 250=15.85% 00:25:59.761 cpu : usr=36.25%, sys=0.90%, ctx=1001, majf=0, minf=9 00:25:59.761 IO depths : 1=3.1%, 2=6.8%, 4=17.4%, 8=62.9%, 16=9.9%, 32=0.0%, >=64=0.0% 00:25:59.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.761 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.761 issued rwts: total=1987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.761 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:25:59.761 00:25:59.761 Run status group 0 (all jobs): 00:25:59.761 READ: bw=21.3MiB/s (22.3MB/s), 792KiB/s-1102KiB/s (811kB/s-1129kB/s), io=214MiB (225MB), run=10002-10058msec 00:25:59.761 02:23:12 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:59.761 02:23:12 -- target/dif.sh@43 -- # local sub 00:25:59.761 02:23:12 -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.761 02:23:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:59.761 02:23:12 -- target/dif.sh@36 -- # local sub_id=0 00:25:59.761 02:23:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.761 02:23:12 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:59.761 02:23:12 -- target/dif.sh@36 -- # local sub_id=1 00:25:59.761 02:23:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.761 02:23:12 -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:59.761 02:23:12 -- target/dif.sh@36 -- # local sub_id=2 00:25:59.761 02:23:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@115 -- # NULL_DIF=1 00:25:59.761 02:23:12 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:59.761 02:23:12 -- target/dif.sh@115 -- # numjobs=2 00:25:59.761 02:23:12 -- target/dif.sh@115 -- # iodepth=8 00:25:59.761 02:23:12 -- target/dif.sh@115 -- # runtime=5 00:25:59.761 02:23:12 -- target/dif.sh@115 -- # files=1 00:25:59.761 02:23:12 -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:59.761 02:23:12 -- target/dif.sh@28 -- # local sub 00:25:59.761 02:23:12 -- target/dif.sh@30 -- # for sub in "$@" 00:25:59.761 02:23:12 -- target/dif.sh@31 -- # create_subsystem 0 00:25:59.761 02:23:12 -- target/dif.sh@18 -- # local sub_id=0 00:25:59.761 02:23:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 1 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 bdev_null0 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 [2024-05-14 02:23:12.460676] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@30 -- # for sub in "$@" 00:25:59.761 02:23:12 -- target/dif.sh@31 -- # create_subsystem 1 00:25:59.761 02:23:12 -- target/dif.sh@18 -- # local sub_id=1 00:25:59.761 02:23:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:59.761 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.761 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.761 bdev_null1 00:25:59.761 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.761 02:23:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:59.762 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.762 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.762 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.762 02:23:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:59.762 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.762 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.762 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.762 02:23:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.762 02:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.762 02:23:12 -- common/autotest_common.sh@10 -- # set +x 00:25:59.762 02:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.762 02:23:12 -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:59.762 02:23:12 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:59.762 02:23:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:59.762 02:23:12 -- nvmf/common.sh@520 -- # config=() 00:25:59.762 02:23:12 -- nvmf/common.sh@520 -- # local subsystem config 00:25:59.762 02:23:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.762 02:23:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.762 { 00:25:59.762 "params": { 
00:25:59.762 "name": "Nvme$subsystem", 00:25:59.762 "trtype": "$TEST_TRANSPORT", 00:25:59.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.762 "adrfam": "ipv4", 00:25:59.762 "trsvcid": "$NVMF_PORT", 00:25:59.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.762 "hdgst": ${hdgst:-false}, 00:25:59.762 "ddgst": ${ddgst:-false} 00:25:59.762 }, 00:25:59.762 "method": "bdev_nvme_attach_controller" 00:25:59.762 } 00:25:59.762 EOF 00:25:59.762 )") 00:25:59.762 02:23:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.762 02:23:12 -- target/dif.sh@82 -- # gen_fio_conf 00:25:59.762 02:23:12 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.762 02:23:12 -- target/dif.sh@54 -- # local file 00:25:59.762 02:23:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:59.762 02:23:12 -- target/dif.sh@56 -- # cat 00:25:59.762 02:23:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:59.762 02:23:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:59.762 02:23:12 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.762 02:23:12 -- nvmf/common.sh@542 -- # cat 00:25:59.762 02:23:12 -- common/autotest_common.sh@1320 -- # shift 00:25:59.762 02:23:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:59.762 02:23:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.762 02:23:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:59.762 02:23:12 -- target/dif.sh@72 -- # (( file <= files )) 00:25:59.762 02:23:12 -- target/dif.sh@73 -- # cat 00:25:59.762 02:23:12 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.762 02:23:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:59.762 02:23:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:59.762 02:23:12 -- target/dif.sh@72 -- # (( file++ )) 00:25:59.762 02:23:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.762 02:23:12 -- target/dif.sh@72 -- # (( file <= files )) 00:25:59.762 02:23:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.762 { 00:25:59.762 "params": { 00:25:59.762 "name": "Nvme$subsystem", 00:25:59.762 "trtype": "$TEST_TRANSPORT", 00:25:59.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.762 "adrfam": "ipv4", 00:25:59.762 "trsvcid": "$NVMF_PORT", 00:25:59.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.762 "hdgst": ${hdgst:-false}, 00:25:59.762 "ddgst": ${ddgst:-false} 00:25:59.762 }, 00:25:59.762 "method": "bdev_nvme_attach_controller" 00:25:59.762 } 00:25:59.762 EOF 00:25:59.762 )") 00:25:59.762 02:23:12 -- nvmf/common.sh@542 -- # cat 00:25:59.762 02:23:12 -- nvmf/common.sh@544 -- # jq . 
00:25:59.762 02:23:12 -- nvmf/common.sh@545 -- # IFS=, 00:25:59.762 02:23:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:59.762 "params": { 00:25:59.762 "name": "Nvme0", 00:25:59.762 "trtype": "tcp", 00:25:59.762 "traddr": "10.0.0.2", 00:25:59.762 "adrfam": "ipv4", 00:25:59.762 "trsvcid": "4420", 00:25:59.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:59.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:59.762 "hdgst": false, 00:25:59.762 "ddgst": false 00:25:59.762 }, 00:25:59.762 "method": "bdev_nvme_attach_controller" 00:25:59.762 },{ 00:25:59.762 "params": { 00:25:59.762 "name": "Nvme1", 00:25:59.762 "trtype": "tcp", 00:25:59.762 "traddr": "10.0.0.2", 00:25:59.762 "adrfam": "ipv4", 00:25:59.762 "trsvcid": "4420", 00:25:59.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:59.762 "hdgst": false, 00:25:59.762 "ddgst": false 00:25:59.762 }, 00:25:59.762 "method": "bdev_nvme_attach_controller" 00:25:59.762 }' 00:25:59.762 02:23:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:59.762 02:23:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:59.762 02:23:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.762 02:23:12 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.762 02:23:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:59.762 02:23:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:59.762 02:23:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:59.762 02:23:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:59.762 02:23:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:59.762 02:23:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.762 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:59.762 ... 00:25:59.762 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:59.762 ... 00:25:59.762 fio-3.35 00:25:59.762 Starting 4 threads 00:25:59.762 [2024-05-14 02:23:13.164167] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:59.762 [2024-05-14 02:23:13.165013] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:03.946 00:26:03.946 filename0: (groupid=0, jobs=1): err= 0: pid=89752: Tue May 14 02:23:18 2024 00:26:03.946 read: IOPS=1908, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5001msec) 00:26:03.946 slat (usec): min=7, max=134, avg= 9.54, stdev= 4.00 00:26:03.946 clat (usec): min=1155, max=6653, avg=4145.12, stdev=204.20 00:26:03.946 lat (usec): min=1163, max=6665, avg=4154.66, stdev=204.39 00:26:03.946 clat percentiles (usec): 00:26:03.946 | 1.00th=[ 3818], 5.00th=[ 3949], 10.00th=[ 3982], 20.00th=[ 4015], 00:26:03.946 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:26:03.946 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4424], 00:26:03.946 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 5997], 99.95th=[ 6587], 00:26:03.946 | 99.99th=[ 6652] 00:26:03.946 bw ( KiB/s): min=14976, max=15728, per=25.11%, avg=15315.56, stdev=235.94, samples=9 00:26:03.946 iops : min= 1872, max= 1966, avg=1914.44, stdev=29.49, samples=9 00:26:03.946 lat (msec) : 2=0.09%, 4=15.08%, 10=84.83% 00:26:03.946 cpu : usr=94.44%, sys=4.30%, ctx=37, majf=0, minf=0 00:26:03.946 IO depths : 1=10.5%, 2=24.4%, 4=50.6%, 8=14.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.946 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.946 issued rwts: total=9543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.946 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:03.946 filename0: (groupid=0, jobs=1): err= 0: pid=89753: Tue May 14 02:23:18 2024 00:26:03.946 read: IOPS=1905, BW=14.9MiB/s (15.6MB/s)(74.4MiB/5001msec) 00:26:03.946 slat (nsec): min=3683, max=59026, avg=17387.39, stdev=6016.93 00:26:03.946 clat (usec): min=3075, max=5211, avg=4114.12, stdev=152.38 00:26:03.946 lat (usec): min=3087, max=5225, avg=4131.50, stdev=152.99 00:26:03.946 clat percentiles (usec): 00:26:03.946 | 1.00th=[ 3818], 5.00th=[ 3884], 10.00th=[ 3949], 20.00th=[ 3982], 00:26:03.946 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4146], 00:26:03.946 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 00:26:03.946 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 4883], 99.95th=[ 4948], 00:26:03.946 | 99.99th=[ 5211] 00:26:03.946 bw ( KiB/s): min=14976, max=15488, per=25.07%, avg=15292.22, stdev=194.81, samples=9 00:26:03.946 iops : min= 1872, max= 1936, avg=1911.44, stdev=24.31, samples=9 00:26:03.946 lat (msec) : 4=22.63%, 10=77.37% 00:26:03.946 cpu : usr=93.64%, sys=5.10%, ctx=8, majf=0, minf=0 00:26:03.946 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.946 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.946 issued rwts: total=9528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.946 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:03.946 filename1: (groupid=0, jobs=1): err= 0: pid=89754: Tue May 14 02:23:18 2024 00:26:03.946 read: IOPS=1904, BW=14.9MiB/s (15.6MB/s)(74.4MiB/5002msec) 00:26:03.946 slat (nsec): min=7162, max=64140, avg=17213.47, stdev=5979.36 00:26:03.946 clat (usec): min=2103, max=7703, avg=4115.51, stdev=212.40 00:26:03.946 lat (usec): min=2115, max=7728, avg=4132.73, stdev=212.88 00:26:03.946 clat percentiles (usec): 00:26:03.946 | 1.00th=[ 3785], 5.00th=[ 3884], 
10.00th=[ 3949], 20.00th=[ 3982], 00:26:03.946 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4146], 00:26:03.946 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 00:26:03.946 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 6587], 99.95th=[ 7504], 00:26:03.946 | 99.99th=[ 7701] 00:26:03.946 bw ( KiB/s): min=14848, max=15616, per=25.04%, avg=15274.67, stdev=239.47, samples=9 00:26:03.946 iops : min= 1856, max= 1952, avg=1909.33, stdev=29.93, samples=9 00:26:03.946 lat (msec) : 4=22.96%, 10=77.04% 00:26:03.946 cpu : usr=93.70%, sys=5.12%, ctx=25, majf=0, minf=1 00:26:03.946 IO depths : 1=11.9%, 2=25.0%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.946 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.946 issued rwts: total=9528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.946 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:03.946 filename1: (groupid=0, jobs=1): err= 0: pid=89755: Tue May 14 02:23:18 2024 00:26:03.946 read: IOPS=1906, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5002msec) 00:26:03.946 slat (usec): min=7, max=300, avg=16.17, stdev= 7.40 00:26:03.946 clat (usec): min=2165, max=6187, avg=4121.97, stdev=172.74 00:26:03.946 lat (usec): min=2179, max=6201, avg=4138.13, stdev=172.65 00:26:03.946 clat percentiles (usec): 00:26:03.946 | 1.00th=[ 3752], 5.00th=[ 3884], 10.00th=[ 3949], 20.00th=[ 3982], 00:26:03.946 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4146], 00:26:03.946 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 00:26:03.946 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 5080], 99.95th=[ 5211], 00:26:03.946 | 99.99th=[ 6194] 00:26:03.946 bw ( KiB/s): min=14976, max=15488, per=25.07%, avg=15292.22, stdev=194.81, samples=9 00:26:03.946 iops : min= 1872, max= 1936, avg=1911.44, stdev=24.31, samples=9 00:26:03.946 lat (msec) : 4=21.08%, 10=78.92% 00:26:03.946 cpu : usr=93.44%, sys=4.96%, ctx=73, majf=0, minf=0 00:26:03.946 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.946 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.946 issued rwts: total=9536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.946 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:03.946 00:26:03.946 Run status group 0 (all jobs): 00:26:03.946 READ: bw=59.6MiB/s (62.5MB/s), 14.9MiB/s-14.9MiB/s (15.6MB/s-15.6MB/s), io=298MiB (312MB), run=5001-5002msec 00:26:03.946 02:23:18 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:03.946 02:23:18 -- target/dif.sh@43 -- # local sub 00:26:03.946 02:23:18 -- target/dif.sh@45 -- # for sub in "$@" 00:26:03.946 02:23:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:03.946 02:23:18 -- target/dif.sh@36 -- # local sub_id=0 00:26:03.946 02:23:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:03.946 02:23:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.946 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:03.946 02:23:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.946 02:23:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:03.946 02:23:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.946 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:03.946 02:23:18 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.946 02:23:18 -- target/dif.sh@45 -- # for sub in "$@" 00:26:03.946 02:23:18 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:03.946 02:23:18 -- target/dif.sh@36 -- # local sub_id=1 00:26:03.946 02:23:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.946 02:23:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.946 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:03.946 02:23:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.946 02:23:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:03.946 02:23:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.946 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:03.946 ************************************ 00:26:03.946 END TEST fio_dif_rand_params 00:26:03.946 ************************************ 00:26:03.946 02:23:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.946 00:26:03.946 real 0m23.401s 00:26:03.946 user 2m6.304s 00:26:03.946 sys 0m4.848s 00:26:03.946 02:23:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.946 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:04.206 02:23:18 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:04.206 02:23:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:04.206 02:23:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:04.206 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:04.206 ************************************ 00:26:04.206 START TEST fio_dif_digest 00:26:04.206 ************************************ 00:26:04.206 02:23:18 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:26:04.206 02:23:18 -- target/dif.sh@123 -- # local NULL_DIF 00:26:04.206 02:23:18 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:04.206 02:23:18 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:04.206 02:23:18 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:04.206 02:23:18 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:04.206 02:23:18 -- target/dif.sh@127 -- # numjobs=3 00:26:04.206 02:23:18 -- target/dif.sh@127 -- # iodepth=3 00:26:04.206 02:23:18 -- target/dif.sh@127 -- # runtime=10 00:26:04.206 02:23:18 -- target/dif.sh@128 -- # hdgst=true 00:26:04.206 02:23:18 -- target/dif.sh@128 -- # ddgst=true 00:26:04.206 02:23:18 -- target/dif.sh@130 -- # create_subsystems 0 00:26:04.206 02:23:18 -- target/dif.sh@28 -- # local sub 00:26:04.206 02:23:18 -- target/dif.sh@30 -- # for sub in "$@" 00:26:04.206 02:23:18 -- target/dif.sh@31 -- # create_subsystem 0 00:26:04.206 02:23:18 -- target/dif.sh@18 -- # local sub_id=0 00:26:04.206 02:23:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:04.206 02:23:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.206 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:04.206 bdev_null0 00:26:04.206 02:23:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.206 02:23:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:04.206 02:23:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.206 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:04.206 02:23:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.206 02:23:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:04.206 
02:23:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.206 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:04.206 02:23:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.206 02:23:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:04.206 02:23:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.206 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:04.206 [2024-05-14 02:23:18.604906] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.206 02:23:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.206 02:23:18 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:04.206 02:23:18 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:04.206 02:23:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:04.206 02:23:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.206 02:23:18 -- nvmf/common.sh@520 -- # config=() 00:26:04.206 02:23:18 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.206 02:23:18 -- nvmf/common.sh@520 -- # local subsystem config 00:26:04.206 02:23:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:04.206 02:23:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:04.206 02:23:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:04.206 { 00:26:04.206 "params": { 00:26:04.206 "name": "Nvme$subsystem", 00:26:04.206 "trtype": "$TEST_TRANSPORT", 00:26:04.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.206 "adrfam": "ipv4", 00:26:04.206 "trsvcid": "$NVMF_PORT", 00:26:04.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.206 "hdgst": ${hdgst:-false}, 00:26:04.206 "ddgst": ${ddgst:-false} 00:26:04.206 }, 00:26:04.206 "method": "bdev_nvme_attach_controller" 00:26:04.206 } 00:26:04.206 EOF 00:26:04.206 )") 00:26:04.206 02:23:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:04.206 02:23:18 -- target/dif.sh@82 -- # gen_fio_conf 00:26:04.206 02:23:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:04.206 02:23:18 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:04.206 02:23:18 -- target/dif.sh@54 -- # local file 00:26:04.206 02:23:18 -- common/autotest_common.sh@1320 -- # shift 00:26:04.206 02:23:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:04.206 02:23:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.206 02:23:18 -- target/dif.sh@56 -- # cat 00:26:04.206 02:23:18 -- nvmf/common.sh@542 -- # cat 00:26:04.206 02:23:18 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:04.206 02:23:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:04.206 02:23:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:04.206 02:23:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:04.206 02:23:18 -- target/dif.sh@72 -- # (( file <= files )) 00:26:04.206 02:23:18 -- nvmf/common.sh@544 -- # jq . 
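For readability, the rpc_cmd calls traced above are forwarded to scripts/rpc.py against the target's RPC socket (the /var/tmp/spdk.sock path seen in this log). Consolidated, the target-side setup for the single DIF-enabled null bdev used by this digest test is roughly the following; the arguments are copied from the trace, only the explicit rpc.py form is an assumption:

    # rough consolidation of the traced rpc_cmd sequence
    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420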
00:26:04.206 02:23:18 -- nvmf/common.sh@545 -- # IFS=, 00:26:04.206 02:23:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:04.206 "params": { 00:26:04.206 "name": "Nvme0", 00:26:04.206 "trtype": "tcp", 00:26:04.206 "traddr": "10.0.0.2", 00:26:04.206 "adrfam": "ipv4", 00:26:04.206 "trsvcid": "4420", 00:26:04.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:04.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:04.206 "hdgst": true, 00:26:04.206 "ddgst": true 00:26:04.206 }, 00:26:04.206 "method": "bdev_nvme_attach_controller" 00:26:04.206 }' 00:26:04.206 02:23:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:04.206 02:23:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:04.206 02:23:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.206 02:23:18 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:04.206 02:23:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:04.206 02:23:18 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:04.206 02:23:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:04.206 02:23:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:04.206 02:23:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:04.206 02:23:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.466 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:04.466 ... 00:26:04.466 fio-3.35 00:26:04.466 Starting 3 threads 00:26:04.724 [2024-05-14 02:23:19.167873] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
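One difference from the earlier randread runs is visible in the generated config just above: "hdgst": true and "ddgst": true, meaning the initiator negotiates NVMe/TCP header and data digests (CRC32C over PDU headers and payloads) for this digest test. As a point of comparison only, if the kernel initiator were used instead of the bdev plugin, the equivalent connect would look roughly like this (long option names from nvme-cli; exact options may vary by version):

    # sketch: same digest settings via nvme-cli instead of the fio bdev plugin
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hdr-digest --data-digest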
00:26:04.724 [2024-05-14 02:23:19.167959] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:16.934 00:26:16.934 filename0: (groupid=0, jobs=1): err= 0: pid=89861: Tue May 14 02:23:29 2024 00:26:16.934 read: IOPS=224, BW=28.0MiB/s (29.4MB/s)(281MiB/10004msec) 00:26:16.934 slat (nsec): min=8095, max=48710, avg=14152.11, stdev=5204.04 00:26:16.934 clat (usec): min=10021, max=54563, avg=13355.95, stdev=1685.11 00:26:16.934 lat (usec): min=10040, max=54576, avg=13370.10, stdev=1685.10 00:26:16.934 clat percentiles (usec): 00:26:16.934 | 1.00th=[11338], 5.00th=[11863], 10.00th=[12256], 20.00th=[12649], 00:26:16.934 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:26:16.934 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14222], 95.00th=[14484], 00:26:16.934 | 99.00th=[15139], 99.50th=[15270], 99.90th=[53216], 99.95th=[54264], 00:26:16.934 | 99.99th=[54789] 00:26:16.934 bw ( KiB/s): min=26112, max=29696, per=38.65%, avg=28703.37, stdev=731.99, samples=19 00:26:16.934 iops : min= 204, max= 232, avg=224.21, stdev= 5.73, samples=19 00:26:16.934 lat (msec) : 20=99.87%, 100=0.13% 00:26:16.934 cpu : usr=91.35%, sys=7.23%, ctx=10, majf=0, minf=0 00:26:16.934 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.934 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.934 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:16.934 filename0: (groupid=0, jobs=1): err= 0: pid=89862: Tue May 14 02:23:29 2024 00:26:16.934 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(248MiB/10005msec) 00:26:16.934 slat (usec): min=7, max=293, avg=14.86, stdev= 9.47 00:26:16.934 clat (usec): min=8159, max=18944, avg=15106.02, stdev=1156.12 00:26:16.934 lat (usec): min=8172, max=18957, avg=15120.88, stdev=1156.15 00:26:16.934 clat percentiles (usec): 00:26:16.934 | 1.00th=[12125], 5.00th=[13304], 10.00th=[13698], 20.00th=[14222], 00:26:16.934 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:26:16.934 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:26:16.934 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18744], 99.95th=[19006], 00:26:16.934 | 99.99th=[19006] 00:26:16.934 bw ( KiB/s): min=24576, max=26368, per=34.19%, avg=25392.42, stdev=550.95, samples=19 00:26:16.934 iops : min= 192, max= 206, avg=198.32, stdev= 4.33, samples=19 00:26:16.934 lat (msec) : 10=0.40%, 20=99.60% 00:26:16.934 cpu : usr=92.51%, sys=5.79%, ctx=109, majf=0, minf=9 00:26:16.934 IO depths : 1=3.9%, 2=96.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.934 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.934 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:16.934 filename0: (groupid=0, jobs=1): err= 0: pid=89863: Tue May 14 02:23:29 2024 00:26:16.934 read: IOPS=157, BW=19.7MiB/s (20.7MB/s)(197MiB/10005msec) 00:26:16.934 slat (nsec): min=7989, max=45369, avg=13938.63, stdev=4669.03 00:26:16.934 clat (usec): min=6864, max=21916, avg=19013.29, stdev=1163.29 00:26:16.934 lat (usec): min=6876, max=21940, avg=19027.22, stdev=1163.52 00:26:16.934 clat percentiles (usec): 00:26:16.934 | 1.00th=[16319], 5.00th=[17433], 
10.00th=[17957], 20.00th=[18220], 00:26:16.934 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:26:16.934 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20317], 95.00th=[20579], 00:26:16.934 | 99.00th=[21365], 99.50th=[21627], 99.90th=[21890], 99.95th=[21890], 00:26:16.934 | 99.99th=[21890] 00:26:16.935 bw ( KiB/s): min=19712, max=21248, per=27.14%, avg=20154.53, stdev=361.72, samples=19 00:26:16.935 iops : min= 154, max= 166, avg=157.42, stdev= 2.85, samples=19 00:26:16.935 lat (msec) : 10=0.06%, 20=83.70%, 50=16.23% 00:26:16.935 cpu : usr=93.55%, sys=5.23%, ctx=13, majf=0, minf=9 00:26:16.935 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.935 issued rwts: total=1577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:16.935 00:26:16.935 Run status group 0 (all jobs): 00:26:16.935 READ: bw=72.5MiB/s (76.0MB/s), 19.7MiB/s-28.0MiB/s (20.7MB/s-29.4MB/s), io=726MiB (761MB), run=10004-10005msec 00:26:16.935 02:23:29 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:16.935 02:23:29 -- target/dif.sh@43 -- # local sub 00:26:16.935 02:23:29 -- target/dif.sh@45 -- # for sub in "$@" 00:26:16.935 02:23:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:16.935 02:23:29 -- target/dif.sh@36 -- # local sub_id=0 00:26:16.935 02:23:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:16.935 02:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.935 02:23:29 -- common/autotest_common.sh@10 -- # set +x 00:26:16.935 02:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.935 02:23:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:16.935 02:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.935 02:23:29 -- common/autotest_common.sh@10 -- # set +x 00:26:16.935 02:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.935 ************************************ 00:26:16.935 END TEST fio_dif_digest 00:26:16.935 ************************************ 00:26:16.935 00:26:16.935 real 0m10.923s 00:26:16.935 user 0m28.367s 00:26:16.935 sys 0m2.053s 00:26:16.935 02:23:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.935 02:23:29 -- common/autotest_common.sh@10 -- # set +x 00:26:16.935 02:23:29 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:16.935 02:23:29 -- target/dif.sh@147 -- # nvmftestfini 00:26:16.935 02:23:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:16.935 02:23:29 -- nvmf/common.sh@116 -- # sync 00:26:16.935 02:23:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:16.935 02:23:29 -- nvmf/common.sh@119 -- # set +e 00:26:16.935 02:23:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:16.935 02:23:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:16.935 rmmod nvme_tcp 00:26:16.935 rmmod nvme_fabrics 00:26:16.935 rmmod nvme_keyring 00:26:16.935 02:23:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:16.935 02:23:29 -- nvmf/common.sh@123 -- # set -e 00:26:16.935 02:23:29 -- nvmf/common.sh@124 -- # return 0 00:26:16.935 02:23:29 -- nvmf/common.sh@477 -- # '[' -n 89101 ']' 00:26:16.935 02:23:29 -- nvmf/common.sh@478 -- # killprocess 89101 00:26:16.935 02:23:29 -- common/autotest_common.sh@926 -- # '[' -z 89101 ']' 00:26:16.935 02:23:29 -- 
common/autotest_common.sh@930 -- # kill -0 89101 00:26:16.935 02:23:29 -- common/autotest_common.sh@931 -- # uname 00:26:16.935 02:23:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:16.935 02:23:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89101 00:26:16.935 killing process with pid 89101 00:26:16.935 02:23:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:16.935 02:23:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:16.935 02:23:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89101' 00:26:16.935 02:23:29 -- common/autotest_common.sh@945 -- # kill 89101 00:26:16.935 02:23:29 -- common/autotest_common.sh@950 -- # wait 89101 00:26:16.935 02:23:29 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:16.935 02:23:29 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:16.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:16.935 Waiting for block devices as requested 00:26:16.935 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:16.935 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:16.935 02:23:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:16.935 02:23:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:16.935 02:23:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.935 02:23:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:16.935 02:23:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.935 02:23:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:16.935 02:23:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.935 02:23:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:16.935 00:26:16.935 real 0m59.637s 00:26:16.935 user 3m51.104s 00:26:16.935 sys 0m14.795s 00:26:16.935 02:23:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.935 02:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:16.935 ************************************ 00:26:16.935 END TEST nvmf_dif 00:26:16.935 ************************************ 00:26:16.935 02:23:30 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:16.935 02:23:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:16.935 02:23:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:16.935 02:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:16.935 ************************************ 00:26:16.935 START TEST nvmf_abort_qd_sizes 00:26:16.935 ************************************ 00:26:16.935 02:23:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:16.935 * Looking for test storage... 
00:26:16.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:16.935 02:23:30 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:16.935 02:23:30 -- nvmf/common.sh@7 -- # uname -s 00:26:16.935 02:23:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.935 02:23:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.935 02:23:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.935 02:23:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.935 02:23:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.935 02:23:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.935 02:23:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.935 02:23:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.935 02:23:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.935 02:23:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.935 02:23:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:26:16.935 02:23:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=01bebc16-ee64-4b1b-82ac-462e1640a9a9 00:26:16.935 02:23:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.935 02:23:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.935 02:23:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:16.935 02:23:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:16.935 02:23:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.935 02:23:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.935 02:23:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.935 02:23:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.935 02:23:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.935 02:23:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.935 02:23:30 -- paths/export.sh@5 -- # export PATH 00:26:16.935 02:23:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.935 02:23:30 -- nvmf/common.sh@46 -- # : 0 00:26:16.935 02:23:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:16.935 02:23:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:16.935 02:23:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:16.935 02:23:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.935 02:23:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.935 02:23:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:16.935 02:23:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:16.935 02:23:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:16.935 02:23:30 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:16.935 02:23:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:16.935 02:23:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.935 02:23:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:16.935 02:23:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:16.935 02:23:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:16.935 02:23:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.935 02:23:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:16.935 02:23:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.935 02:23:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:16.935 02:23:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:16.935 02:23:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:16.935 02:23:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:16.935 02:23:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:16.935 02:23:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:16.935 02:23:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.935 02:23:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.935 02:23:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:16.935 02:23:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:16.935 02:23:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:16.935 02:23:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:16.935 02:23:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:16.935 02:23:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.935 02:23:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:16.935 02:23:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:16.935 02:23:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:16.936 02:23:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:16.936 02:23:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:16.936 02:23:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:16.936 Cannot find device "nvmf_tgt_br" 00:26:16.936 02:23:30 -- nvmf/common.sh@154 -- # true 00:26:16.936 02:23:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:16.936 Cannot find device "nvmf_tgt_br2" 00:26:16.936 02:23:30 -- nvmf/common.sh@155 -- # true 
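Here nvmf_veth_init is clearing out any stale interfaces and rebuilding the test network from scratch: one veth pair per target interface, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and the host-side ends bridged together. As a reference, a condensed standalone sketch of the topology it creates (commands and the 10.0.0.x addressing taken from the trace below; run as root, an approximation rather than the harness itself):

# create the target namespace and the three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic on port 4420 and verify connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2

Bridging the host-side peers on nvmf_br is what lets the initiator at 10.0.0.1 reach both target addresses (10.0.0.2 and 10.0.0.3), which the ping checks in the trace below confirm before the target process is started inside the namespace.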
00:26:16.936 02:23:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:16.936 02:23:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:16.936 Cannot find device "nvmf_tgt_br" 00:26:16.936 02:23:30 -- nvmf/common.sh@157 -- # true 00:26:16.936 02:23:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:16.936 Cannot find device "nvmf_tgt_br2" 00:26:16.936 02:23:30 -- nvmf/common.sh@158 -- # true 00:26:16.936 02:23:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:16.936 02:23:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:16.936 02:23:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:16.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:16.936 02:23:30 -- nvmf/common.sh@161 -- # true 00:26:16.936 02:23:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:16.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:16.936 02:23:30 -- nvmf/common.sh@162 -- # true 00:26:16.936 02:23:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:16.936 02:23:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:16.936 02:23:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:16.936 02:23:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:16.936 02:23:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:16.936 02:23:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:16.936 02:23:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:16.936 02:23:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:16.936 02:23:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:16.936 02:23:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:16.936 02:23:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:16.936 02:23:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:16.936 02:23:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:16.936 02:23:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:16.936 02:23:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:16.936 02:23:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:16.936 02:23:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:16.936 02:23:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:16.936 02:23:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:16.936 02:23:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:16.936 02:23:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:16.936 02:23:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:16.936 02:23:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:16.936 02:23:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:16.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:16.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:26:16.936 00:26:16.936 --- 10.0.0.2 ping statistics --- 00:26:16.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.936 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:26:16.936 02:23:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:16.936 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:16.936 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:26:16.936 00:26:16.936 --- 10.0.0.3 ping statistics --- 00:26:16.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.936 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:26:16.936 02:23:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:16.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:16.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:26:16.936 00:26:16.936 --- 10.0.0.1 ping statistics --- 00:26:16.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.936 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:16.936 02:23:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.936 02:23:30 -- nvmf/common.sh@421 -- # return 0 00:26:16.936 02:23:30 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:16.936 02:23:30 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:17.195 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:17.453 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:17.453 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:17.453 02:23:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.453 02:23:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:17.453 02:23:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:17.453 02:23:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.453 02:23:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:17.453 02:23:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:17.453 02:23:31 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:17.453 02:23:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:17.453 02:23:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:17.453 02:23:31 -- common/autotest_common.sh@10 -- # set +x 00:26:17.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.453 02:23:31 -- nvmf/common.sh@469 -- # nvmfpid=90454 00:26:17.453 02:23:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:17.453 02:23:31 -- nvmf/common.sh@470 -- # waitforlisten 90454 00:26:17.453 02:23:31 -- common/autotest_common.sh@819 -- # '[' -z 90454 ']' 00:26:17.454 02:23:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.454 02:23:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:17.454 02:23:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.454 02:23:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:17.454 02:23:31 -- common/autotest_common.sh@10 -- # set +x 00:26:17.454 [2024-05-14 02:23:32.010182] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:17.454 [2024-05-14 02:23:32.010268] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.727 [2024-05-14 02:23:32.153026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.727 [2024-05-14 02:23:32.229869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:17.727 [2024-05-14 02:23:32.230304] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.727 [2024-05-14 02:23:32.230465] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.727 [2024-05-14 02:23:32.230706] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:17.727 [2024-05-14 02:23:32.230940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.727 [2024-05-14 02:23:32.231174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.727 [2024-05-14 02:23:32.231174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.727 [2024-05-14 02:23:32.231071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.688 02:23:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:18.688 02:23:32 -- common/autotest_common.sh@852 -- # return 0 00:26:18.688 02:23:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:18.688 02:23:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:18.688 02:23:32 -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 02:23:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:18.688 02:23:33 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:18.688 02:23:33 -- scripts/common.sh@312 -- # local nvmes 00:26:18.688 02:23:33 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:18.688 02:23:33 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:18.688 02:23:33 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:18.688 02:23:33 -- scripts/common.sh@297 -- # local bdf= 00:26:18.688 02:23:33 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:18.688 02:23:33 -- scripts/common.sh@232 -- # local class 00:26:18.688 02:23:33 -- scripts/common.sh@233 -- # local subclass 00:26:18.688 02:23:33 -- scripts/common.sh@234 -- # local progif 00:26:18.688 02:23:33 -- scripts/common.sh@235 -- # printf %02x 1 00:26:18.688 02:23:33 -- scripts/common.sh@235 -- # class=01 00:26:18.688 02:23:33 -- scripts/common.sh@236 -- # printf %02x 8 00:26:18.688 02:23:33 -- scripts/common.sh@236 -- # subclass=08 00:26:18.688 02:23:33 -- scripts/common.sh@237 -- # printf %02x 2 00:26:18.688 02:23:33 -- scripts/common.sh@237 -- # progif=02 00:26:18.688 02:23:33 -- scripts/common.sh@239 -- # hash lspci 00:26:18.688 02:23:33 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:18.688 02:23:33 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:18.688 02:23:33 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:18.688 02:23:33 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:18.688 02:23:33 -- scripts/common.sh@244 -- # tr -d '"' 00:26:18.688 02:23:33 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:18.688 02:23:33 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:18.688 02:23:33 -- scripts/common.sh@15 -- # local i 00:26:18.688 02:23:33 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:18.688 02:23:33 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:18.688 02:23:33 -- scripts/common.sh@24 -- # return 0 00:26:18.688 02:23:33 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:18.688 02:23:33 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:18.688 02:23:33 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:18.688 02:23:33 -- scripts/common.sh@15 -- # local i 00:26:18.688 02:23:33 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:18.688 02:23:33 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:18.688 02:23:33 -- scripts/common.sh@24 -- # return 0 00:26:18.688 02:23:33 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:18.688 02:23:33 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:18.688 02:23:33 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:18.688 02:23:33 -- scripts/common.sh@322 -- # uname -s 00:26:18.688 02:23:33 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:18.688 02:23:33 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:18.688 02:23:33 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:18.688 02:23:33 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:18.688 02:23:33 -- scripts/common.sh@322 -- # uname -s 00:26:18.688 02:23:33 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:18.688 02:23:33 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:18.688 02:23:33 -- scripts/common.sh@327 -- # (( 2 )) 00:26:18.688 02:23:33 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:18.688 02:23:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:18.688 02:23:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:18.688 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 ************************************ 00:26:18.688 START TEST spdk_target_abort 00:26:18.688 ************************************ 00:26:18.688 02:23:33 -- common/autotest_common.sh@1104 -- # spdk_target 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:18.688 02:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.688 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 spdk_targetn1 00:26:18.688 02:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:18.688 02:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.688 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 [2024-05-14 
02:23:33.161198] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.688 02:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:18.688 02:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.688 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 02:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:18.688 02:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.688 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 02:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:18.688 02:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.688 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 [2024-05-14 02:23:33.193353] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.688 02:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:18.688 02:23:33 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:21.974 Initializing NVMe Controllers 00:26:21.974 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:21.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:21.974 Initialization complete. Launching workers. 00:26:21.974 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9931, failed: 0 00:26:21.974 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1124, failed to submit 8807 00:26:21.974 success 784, unsuccess 340, failed 0 00:26:21.974 02:23:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:21.974 02:23:36 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:25.258 [2024-05-14 02:23:39.663932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135ac20 is same with the state(5) to be set 00:26:25.258 [2024-05-14 02:23:39.664018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135ac20 is same with the state(5) to be set 00:26:25.258 [2024-05-14 02:23:39.664047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135ac20 is same with the state(5) to be set 00:26:25.258 [2024-05-14 02:23:39.664071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135ac20 is same with the state(5) to be set 00:26:25.258 Initializing NVMe Controllers 00:26:25.258 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:25.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:25.258 Initialization complete. Launching workers. 00:26:25.258 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5997, failed: 0 00:26:25.258 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1284, failed to submit 4713 00:26:25.258 success 245, unsuccess 1039, failed 0 00:26:25.258 02:23:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:25.258 02:23:39 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:28.542 Initializing NVMe Controllers 00:26:28.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:28.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:28.542 Initialization complete. Launching workers. 
00:26:28.542 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 29219, failed: 0 00:26:28.542 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2674, failed to submit 26545 00:26:28.542 success 447, unsuccess 2227, failed 0 00:26:28.542 02:23:43 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:28.542 02:23:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.542 02:23:43 -- common/autotest_common.sh@10 -- # set +x 00:26:28.542 02:23:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.542 02:23:43 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:28.542 02:23:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.542 02:23:43 -- common/autotest_common.sh@10 -- # set +x 00:26:29.106 02:23:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.106 02:23:43 -- target/abort_qd_sizes.sh@62 -- # killprocess 90454 00:26:29.106 02:23:43 -- common/autotest_common.sh@926 -- # '[' -z 90454 ']' 00:26:29.106 02:23:43 -- common/autotest_common.sh@930 -- # kill -0 90454 00:26:29.106 02:23:43 -- common/autotest_common.sh@931 -- # uname 00:26:29.106 02:23:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:29.106 02:23:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90454 00:26:29.106 02:23:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:29.106 02:23:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:29.106 02:23:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90454' 00:26:29.106 killing process with pid 90454 00:26:29.106 02:23:43 -- common/autotest_common.sh@945 -- # kill 90454 00:26:29.106 02:23:43 -- common/autotest_common.sh@950 -- # wait 90454 00:26:29.106 00:26:29.106 real 0m10.591s 00:26:29.106 user 0m43.209s 00:26:29.106 sys 0m1.755s 00:26:29.106 02:23:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.106 02:23:43 -- common/autotest_common.sh@10 -- # set +x 00:26:29.106 ************************************ 00:26:29.106 END TEST spdk_target_abort 00:26:29.106 ************************************ 00:26:29.363 02:23:43 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:29.363 02:23:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:29.363 02:23:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:29.363 02:23:43 -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 ************************************ 00:26:29.363 START TEST kernel_target_abort 00:26:29.363 ************************************ 00:26:29.363 02:23:43 -- common/autotest_common.sh@1104 -- # kernel_target 00:26:29.363 02:23:43 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:29.363 02:23:43 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:29.363 02:23:43 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:29.363 02:23:43 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:29.363 02:23:43 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:29.363 02:23:43 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:29.363 02:23:43 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:29.363 02:23:43 -- nvmf/common.sh@627 -- # local block nvme 00:26:29.363 02:23:43 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:29.363 02:23:43 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:29.363 02:23:43 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:29.363 02:23:43 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:29.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:29.620 Waiting for block devices as requested 00:26:29.620 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:29.878 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:29.878 02:23:44 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:29.878 02:23:44 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:29.878 02:23:44 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:29.878 02:23:44 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:29.878 02:23:44 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:29.878 No valid GPT data, bailing 00:26:29.878 02:23:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:29.878 02:23:44 -- scripts/common.sh@393 -- # pt= 00:26:29.878 02:23:44 -- scripts/common.sh@394 -- # return 1 00:26:29.878 02:23:44 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:29.878 02:23:44 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:29.878 02:23:44 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:29.878 02:23:44 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:29.878 02:23:44 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:29.878 02:23:44 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:29.878 No valid GPT data, bailing 00:26:29.878 02:23:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:29.878 02:23:44 -- scripts/common.sh@393 -- # pt= 00:26:29.878 02:23:44 -- scripts/common.sh@394 -- # return 1 00:26:29.878 02:23:44 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:29.878 02:23:44 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:29.878 02:23:44 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:29.878 02:23:44 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:29.878 02:23:44 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:29.878 02:23:44 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:30.136 No valid GPT data, bailing 00:26:30.136 02:23:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:30.136 02:23:44 -- scripts/common.sh@393 -- # pt= 00:26:30.136 02:23:44 -- scripts/common.sh@394 -- # return 1 00:26:30.136 02:23:44 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:30.136 02:23:44 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:30.136 02:23:44 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:30.136 02:23:44 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:30.136 02:23:44 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:30.136 02:23:44 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:30.136 No valid GPT data, bailing 00:26:30.136 02:23:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:30.136 02:23:44 -- scripts/common.sh@393 -- # pt= 00:26:30.136 02:23:44 -- scripts/common.sh@394 -- # return 1 00:26:30.136 02:23:44 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:30.136 02:23:44 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:30.136 02:23:44 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:30.136 02:23:44 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:30.136 02:23:44 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:30.136 02:23:44 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:30.136 02:23:44 -- nvmf/common.sh@654 -- # echo 1 00:26:30.136 02:23:44 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:30.136 02:23:44 -- nvmf/common.sh@656 -- # echo 1 00:26:30.136 02:23:44 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:30.136 02:23:44 -- nvmf/common.sh@663 -- # echo tcp 00:26:30.136 02:23:44 -- nvmf/common.sh@664 -- # echo 4420 00:26:30.136 02:23:44 -- nvmf/common.sh@665 -- # echo ipv4 00:26:30.136 02:23:44 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:30.136 02:23:44 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01bebc16-ee64-4b1b-82ac-462e1640a9a9 --hostid=01bebc16-ee64-4b1b-82ac-462e1640a9a9 -a 10.0.0.1 -t tcp -s 4420 00:26:30.136 00:26:30.136 Discovery Log Number of Records 2, Generation counter 2 00:26:30.136 =====Discovery Log Entry 0====== 00:26:30.136 trtype: tcp 00:26:30.136 adrfam: ipv4 00:26:30.136 subtype: current discovery subsystem 00:26:30.136 treq: not specified, sq flow control disable supported 00:26:30.136 portid: 1 00:26:30.136 trsvcid: 4420 00:26:30.136 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:30.136 traddr: 10.0.0.1 00:26:30.136 eflags: none 00:26:30.136 sectype: none 00:26:30.136 =====Discovery Log Entry 1====== 00:26:30.136 trtype: tcp 00:26:30.136 adrfam: ipv4 00:26:30.136 subtype: nvme subsystem 00:26:30.136 treq: not specified, sq flow control disable supported 00:26:30.136 portid: 1 00:26:30.136 trsvcid: 4420 00:26:30.136 subnqn: kernel_target 00:26:30.136 traddr: 10.0.0.1 00:26:30.136 eflags: none 00:26:30.136 sectype: none 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:30.136 02:23:44 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:33.420 Initializing NVMe Controllers 00:26:33.420 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:33.420 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:33.420 Initialization complete. Launching workers. 00:26:33.420 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 29143, failed: 0 00:26:33.420 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29143, failed to submit 0 00:26:33.420 success 0, unsuccess 29143, failed 0 00:26:33.420 02:23:47 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:33.420 02:23:47 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:36.780 Initializing NVMe Controllers 00:26:36.780 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:36.780 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:36.780 Initialization complete. Launching workers. 00:26:36.780 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 64248, failed: 0 00:26:36.780 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26614, failed to submit 37634 00:26:36.780 success 0, unsuccess 26614, failed 0 00:26:36.780 02:23:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:36.780 02:23:50 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:40.066 Initializing NVMe Controllers 00:26:40.066 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:40.066 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:40.066 Initialization complete. Launching workers. 
00:26:40.066 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 75246, failed: 0 00:26:40.066 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18782, failed to submit 56464 00:26:40.066 success 0, unsuccess 18782, failed 0 00:26:40.066 02:23:54 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:40.066 02:23:54 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:40.066 02:23:54 -- nvmf/common.sh@677 -- # echo 0 00:26:40.066 02:23:54 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:40.066 02:23:54 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:40.066 02:23:54 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:40.066 02:23:54 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:40.066 02:23:54 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:40.066 02:23:54 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:40.066 00:26:40.066 real 0m10.498s 00:26:40.066 user 0m5.397s 00:26:40.066 sys 0m2.442s 00:26:40.066 02:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.066 02:23:54 -- common/autotest_common.sh@10 -- # set +x 00:26:40.066 ************************************ 00:26:40.066 END TEST kernel_target_abort 00:26:40.066 ************************************ 00:26:40.066 02:23:54 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:40.066 02:23:54 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:40.066 02:23:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:40.066 02:23:54 -- nvmf/common.sh@116 -- # sync 00:26:40.066 02:23:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:40.066 02:23:54 -- nvmf/common.sh@119 -- # set +e 00:26:40.066 02:23:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:40.066 02:23:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:40.066 rmmod nvme_tcp 00:26:40.066 rmmod nvme_fabrics 00:26:40.066 rmmod nvme_keyring 00:26:40.066 02:23:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:40.066 02:23:54 -- nvmf/common.sh@123 -- # set -e 00:26:40.066 02:23:54 -- nvmf/common.sh@124 -- # return 0 00:26:40.066 02:23:54 -- nvmf/common.sh@477 -- # '[' -n 90454 ']' 00:26:40.066 02:23:54 -- nvmf/common.sh@478 -- # killprocess 90454 00:26:40.066 02:23:54 -- common/autotest_common.sh@926 -- # '[' -z 90454 ']' 00:26:40.066 02:23:54 -- common/autotest_common.sh@930 -- # kill -0 90454 00:26:40.066 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (90454) - No such process 00:26:40.066 Process with pid 90454 is not found 00:26:40.066 02:23:54 -- common/autotest_common.sh@953 -- # echo 'Process with pid 90454 is not found' 00:26:40.066 02:23:54 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:40.066 02:23:54 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:40.634 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:40.634 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:40.634 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:40.634 02:23:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:40.634 02:23:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:40.634 02:23:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.634 02:23:55 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:26:40.634 02:23:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.634 02:23:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:40.634 02:23:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.634 02:23:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:40.634 ************************************ 00:26:40.634 END TEST nvmf_abort_qd_sizes 00:26:40.634 ************************************ 00:26:40.634 00:26:40.634 real 0m24.597s 00:26:40.634 user 0m49.986s 00:26:40.634 sys 0m5.547s 00:26:40.634 02:23:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.634 02:23:55 -- common/autotest_common.sh@10 -- # set +x 00:26:40.634 02:23:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:40.634 02:23:55 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:40.634 02:23:55 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:40.634 02:23:55 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:26:40.634 02:23:55 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:26:40.634 02:23:55 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:26:40.634 02:23:55 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:26:40.634 02:23:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:40.634 02:23:55 -- common/autotest_common.sh@10 -- # set +x 00:26:40.634 02:23:55 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:26:40.634 02:23:55 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:26:40.634 02:23:55 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:26:40.634 02:23:55 -- common/autotest_common.sh@10 -- # set +x 00:26:42.537 INFO: APP EXITING 00:26:42.537 INFO: killing all VMs 00:26:42.537 INFO: killing vhost app 00:26:42.537 INFO: EXIT DONE 00:26:43.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:43.104 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:43.104 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:44.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:44.042 Cleaning 00:26:44.042 Removing: /var/run/dpdk/spdk0/config 00:26:44.042 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:44.042 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:44.042 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:44.042 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:44.042 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:44.042 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:44.042 Removing: /var/run/dpdk/spdk1/config 00:26:44.042 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:44.042 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:44.042 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:26:44.042 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:44.042 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:44.042 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:44.042 Removing: /var/run/dpdk/spdk2/config 00:26:44.042 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:44.042 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:44.042 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:44.042 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:44.042 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:44.042 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:44.042 Removing: /var/run/dpdk/spdk3/config 00:26:44.042 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:44.042 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:44.042 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:44.042 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:44.042 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:44.042 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:44.042 Removing: /var/run/dpdk/spdk4/config 00:26:44.042 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:44.042 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:44.042 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:44.042 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:44.042 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:44.042 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:44.042 Removing: /dev/shm/nvmf_trace.0 00:26:44.042 Removing: /dev/shm/spdk_tgt_trace.pid55625 00:26:44.042 Removing: /var/run/dpdk/spdk0 00:26:44.042 Removing: /var/run/dpdk/spdk1 00:26:44.042 Removing: /var/run/dpdk/spdk2 00:26:44.042 Removing: /var/run/dpdk/spdk3 00:26:44.042 Removing: /var/run/dpdk/spdk4 00:26:44.042 Removing: /var/run/dpdk/spdk_pid55492 00:26:44.042 Removing: /var/run/dpdk/spdk_pid55625 00:26:44.042 Removing: /var/run/dpdk/spdk_pid55925 00:26:44.042 Removing: /var/run/dpdk/spdk_pid56205 00:26:44.042 Removing: /var/run/dpdk/spdk_pid56380 00:26:44.042 Removing: /var/run/dpdk/spdk_pid56450 00:26:44.042 Removing: /var/run/dpdk/spdk_pid56541 00:26:44.042 Removing: /var/run/dpdk/spdk_pid56630 00:26:44.042 Removing: /var/run/dpdk/spdk_pid56668 00:26:44.042 Removing: /var/run/dpdk/spdk_pid56698 00:26:44.042 Removing: /var/run/dpdk/spdk_pid56759 00:26:44.042 Removing: /var/run/dpdk/spdk_pid56876 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57499 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57563 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57632 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57660 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57739 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57767 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57841 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57873 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57920 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57950 00:26:44.042 Removing: /var/run/dpdk/spdk_pid57996 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58026 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58176 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58207 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58281 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58350 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58375 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58433 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58447 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58482 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58501 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58536 
00:26:44.042 Removing: /var/run/dpdk/spdk_pid58550 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58584 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58604 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58633 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58658 00:26:44.042 Removing: /var/run/dpdk/spdk_pid58687 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58705 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58741 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58755 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58795 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58809 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58838 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58863 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58892 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58914 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58948 00:26:44.301 Removing: /var/run/dpdk/spdk_pid58962 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59002 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59016 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59051 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59070 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59099 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59119 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59153 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59173 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59206 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59227 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59256 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59278 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59316 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59333 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59376 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59390 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59429 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59445 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59475 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59544 00:26:44.301 Removing: /var/run/dpdk/spdk_pid59654 00:26:44.301 Removing: /var/run/dpdk/spdk_pid60064 00:26:44.301 Removing: /var/run/dpdk/spdk_pid66736 00:26:44.301 Removing: /var/run/dpdk/spdk_pid67075 00:26:44.301 Removing: /var/run/dpdk/spdk_pid68250 00:26:44.301 Removing: /var/run/dpdk/spdk_pid68626 00:26:44.301 Removing: /var/run/dpdk/spdk_pid68895 00:26:44.301 Removing: /var/run/dpdk/spdk_pid68941 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69200 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69208 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69266 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69319 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69379 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69423 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69425 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69456 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69493 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69495 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69553 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69617 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69672 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69710 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69722 00:26:44.301 Removing: /var/run/dpdk/spdk_pid69743 00:26:44.301 Removing: /var/run/dpdk/spdk_pid70036 00:26:44.301 Removing: /var/run/dpdk/spdk_pid70177 00:26:44.301 Removing: /var/run/dpdk/spdk_pid70429 00:26:44.301 Removing: /var/run/dpdk/spdk_pid70484 00:26:44.301 Removing: /var/run/dpdk/spdk_pid70859 00:26:44.301 Removing: /var/run/dpdk/spdk_pid71385 00:26:44.301 Removing: /var/run/dpdk/spdk_pid71819 00:26:44.301 Removing: /var/run/dpdk/spdk_pid72727 00:26:44.301 Removing: 
/var/run/dpdk/spdk_pid73701 00:26:44.301 Removing: /var/run/dpdk/spdk_pid73812 00:26:44.301 Removing: /var/run/dpdk/spdk_pid73881 00:26:44.301 Removing: /var/run/dpdk/spdk_pid75335 00:26:44.301 Removing: /var/run/dpdk/spdk_pid75568 00:26:44.301 Removing: /var/run/dpdk/spdk_pid75994 00:26:44.301 Removing: /var/run/dpdk/spdk_pid76105 00:26:44.301 Removing: /var/run/dpdk/spdk_pid76251 00:26:44.301 Removing: /var/run/dpdk/spdk_pid76297 00:26:44.301 Removing: /var/run/dpdk/spdk_pid76341 00:26:44.301 Removing: /var/run/dpdk/spdk_pid76388 00:26:44.301 Removing: /var/run/dpdk/spdk_pid76554 00:26:44.301 Removing: /var/run/dpdk/spdk_pid76708 00:26:44.301 Removing: /var/run/dpdk/spdk_pid76972 00:26:44.301 Removing: /var/run/dpdk/spdk_pid77089 00:26:44.301 Removing: /var/run/dpdk/spdk_pid77509 00:26:44.301 Removing: /var/run/dpdk/spdk_pid77890 00:26:44.301 Removing: /var/run/dpdk/spdk_pid77893 00:26:44.301 Removing: /var/run/dpdk/spdk_pid80115 00:26:44.301 Removing: /var/run/dpdk/spdk_pid80421 00:26:44.560 Removing: /var/run/dpdk/spdk_pid80906 00:26:44.560 Removing: /var/run/dpdk/spdk_pid80913 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81251 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81272 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81286 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81311 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81324 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81473 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81481 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81589 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81591 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81694 00:26:44.560 Removing: /var/run/dpdk/spdk_pid81701 00:26:44.560 Removing: /var/run/dpdk/spdk_pid82142 00:26:44.560 Removing: /var/run/dpdk/spdk_pid82196 00:26:44.560 Removing: /var/run/dpdk/spdk_pid82269 00:26:44.560 Removing: /var/run/dpdk/spdk_pid82328 00:26:44.560 Removing: /var/run/dpdk/spdk_pid82666 00:26:44.560 Removing: /var/run/dpdk/spdk_pid82928 00:26:44.560 Removing: /var/run/dpdk/spdk_pid83419 00:26:44.560 Removing: /var/run/dpdk/spdk_pid83970 00:26:44.560 Removing: /var/run/dpdk/spdk_pid84434 00:26:44.560 Removing: /var/run/dpdk/spdk_pid84505 00:26:44.560 Removing: /var/run/dpdk/spdk_pid84590 00:26:44.560 Removing: /var/run/dpdk/spdk_pid84686 00:26:44.560 Removing: /var/run/dpdk/spdk_pid84815 00:26:44.560 Removing: /var/run/dpdk/spdk_pid84902 00:26:44.560 Removing: /var/run/dpdk/spdk_pid84997 00:26:44.560 Removing: /var/run/dpdk/spdk_pid85083 00:26:44.560 Removing: /var/run/dpdk/spdk_pid85425 00:26:44.560 Removing: /var/run/dpdk/spdk_pid86122 00:26:44.560 Removing: /var/run/dpdk/spdk_pid87477 00:26:44.560 Removing: /var/run/dpdk/spdk_pid87677 00:26:44.560 Removing: /var/run/dpdk/spdk_pid87967 00:26:44.560 Removing: /var/run/dpdk/spdk_pid88266 00:26:44.560 Removing: /var/run/dpdk/spdk_pid88816 00:26:44.560 Removing: /var/run/dpdk/spdk_pid88821 00:26:44.560 Removing: /var/run/dpdk/spdk_pid89176 00:26:44.560 Removing: /var/run/dpdk/spdk_pid89335 00:26:44.560 Removing: /var/run/dpdk/spdk_pid89495 00:26:44.560 Removing: /var/run/dpdk/spdk_pid89591 00:26:44.560 Removing: /var/run/dpdk/spdk_pid89742 00:26:44.560 Removing: /var/run/dpdk/spdk_pid89850 00:26:44.560 Removing: /var/run/dpdk/spdk_pid90529 00:26:44.560 Removing: /var/run/dpdk/spdk_pid90563 00:26:44.560 Removing: /var/run/dpdk/spdk_pid90595 00:26:44.560 Removing: /var/run/dpdk/spdk_pid90841 00:26:44.560 Removing: /var/run/dpdk/spdk_pid90875 00:26:44.560 Removing: /var/run/dpdk/spdk_pid90906 00:26:44.560 Clean 00:26:44.560 killing process with pid 
49687 00:26:44.818 killing process with pid 49692 00:26:44.818 02:23:59 -- common/autotest_common.sh@1436 -- # return 0 00:26:44.818 02:23:59 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:26:44.818 02:23:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:44.818 02:23:59 -- common/autotest_common.sh@10 -- # set +x 00:26:44.818 02:23:59 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:26:44.818 02:23:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:44.818 02:23:59 -- common/autotest_common.sh@10 -- # set +x 00:26:44.818 02:23:59 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:44.818 02:23:59 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:44.818 02:23:59 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:44.818 02:23:59 -- spdk/autotest.sh@394 -- # hash lcov 00:26:44.818 02:23:59 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:44.818 02:23:59 -- spdk/autotest.sh@396 -- # hostname 00:26:44.818 02:23:59 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:45.077 geninfo: WARNING: invalid characters removed from testname! 00:27:17.161 02:24:29 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:19.066 02:24:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:22.351 02:24:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:25.640 02:24:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:28.929 02:24:43 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:31.456 02:24:45 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:34.040 02:24:48 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:34.040 02:24:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:34.040 02:24:48 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:34.040 02:24:48 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.040 02:24:48 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.040 02:24:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.040 02:24:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.040 02:24:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.040 02:24:48 -- paths/export.sh@5 -- $ export PATH 00:27:34.041 02:24:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.041 02:24:48 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:34.041 02:24:48 -- common/autobuild_common.sh@435 -- $ date +%s 00:27:34.041 02:24:48 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715653488.XXXXXX 00:27:34.041 02:24:48 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715653488.gl5jpV 00:27:34.041 02:24:48 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:27:34.041 02:24:48 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:27:34.041 02:24:48 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:27:34.041 02:24:48 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:34.041 02:24:48 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:34.041 
02:24:48 -- common/autobuild_common.sh@451 -- $ get_config_params 00:27:34.041 02:24:48 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:27:34.041 02:24:48 -- common/autotest_common.sh@10 -- $ set +x 00:27:34.041 02:24:48 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:27:34.041 02:24:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:34.041 02:24:48 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:34.041 02:24:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:34.041 02:24:48 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:34.041 02:24:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:34.041 02:24:48 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:34.041 02:24:48 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:34.041 02:24:48 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:34.041 02:24:48 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:34.041 02:24:48 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:34.041 + [[ -n 5147 ]] 00:27:34.041 + sudo kill 5147 00:27:34.311 [Pipeline] } 00:27:34.331 [Pipeline] // timeout 00:27:34.337 [Pipeline] } 00:27:34.356 [Pipeline] // stage 00:27:34.362 [Pipeline] } 00:27:34.381 [Pipeline] // catchError 00:27:34.392 [Pipeline] stage 00:27:34.395 [Pipeline] { (Stop VM) 00:27:34.411 [Pipeline] sh 00:27:34.692 + vagrant halt 00:27:37.983 ==> default: Halting domain... 00:27:44.557 [Pipeline] sh 00:27:44.834 + vagrant destroy -f 00:27:48.120 ==> default: Removing domain... 00:27:48.132 [Pipeline] sh 00:27:48.413 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:27:48.423 [Pipeline] } 00:27:48.442 [Pipeline] // stage 00:27:48.448 [Pipeline] } 00:27:48.465 [Pipeline] // dir 00:27:48.472 [Pipeline] } 00:27:48.489 [Pipeline] // wrap 00:27:48.497 [Pipeline] } 00:27:48.513 [Pipeline] // catchError 00:27:48.523 [Pipeline] stage 00:27:48.525 [Pipeline] { (Epilogue) 00:27:48.540 [Pipeline] sh 00:27:48.821 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:55.397 [Pipeline] catchError 00:27:55.399 [Pipeline] { 00:27:55.416 [Pipeline] sh 00:27:55.700 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:55.959 Artifacts sizes are good 00:27:55.970 [Pipeline] } 00:27:55.989 [Pipeline] // catchError 00:27:56.001 [Pipeline] archiveArtifacts 00:27:56.007 Archiving artifacts 00:27:56.178 [Pipeline] cleanWs 00:27:56.189 [WS-CLEANUP] Deleting project workspace... 00:27:56.189 [WS-CLEANUP] Deferred wipeout is used... 00:27:56.196 [WS-CLEANUP] done 00:27:56.198 [Pipeline] } 00:27:56.215 [Pipeline] // stage 00:27:56.221 [Pipeline] } 00:27:56.238 [Pipeline] // node 00:27:56.246 [Pipeline] End of Pipeline 00:27:56.285 Finished: SUCCESS
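
Editor's note: for readers skimming the console output above, the coverage post-processing that autotest.sh performs at the end of the run reduces to the following shell sketch. It is reconstructed from the lcov invocations logged above (the --rc options, the capture, the base/test merge, and the path filters); the variable names, the loop, and the helper script framing are illustrative and are not the autotest.sh source itself.

    #!/usr/bin/env bash
    # Sketch of the coverage post-processing recorded in the log above.
    # Paths match the ones this run used; LCOV_OPTS mirrors the --rc flags
    # passed to every lcov call. Reconstruction for readability only.
    set -euo pipefail

    rootdir=/home/vagrant/spdk_repo/spdk
    out=$rootdir/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"

    # Capture coverage gathered during the test run, tagged with the VM hostname.
    lcov $LCOV_OPTS -c -d "$rootdir" -t "$(hostname)" -o "$out/cov_test.info"

    # Merge the pre-test baseline with the test capture.
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Strip paths that are not SPDK's own code or not of interest for coverage.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done

    # Drop intermediates once cov_total.info is final.
    rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR

Only cov_total.info is kept as the build artifact; the base and test captures are deleted in the final cleanup step, exactly as the rm -f at autotest.sh@403 shows.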